2026-03-30 00:00:09.295602 | Job console starting
2026-03-30 00:00:09.311560 | Updating git repos
2026-03-30 00:00:09.383070 | Cloning repos into workspace
2026-03-30 00:00:09.778928 | Restoring repo states
2026-03-30 00:00:09.826111 | Merging changes
2026-03-30 00:00:09.826132 | Checking out repos
2026-03-30 00:00:10.484289 | Preparing playbooks
2026-03-30 00:00:11.563153 | Running Ansible setup
2026-03-30 00:00:19.814360 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-30 00:00:21.976150 |
2026-03-30 00:00:21.976285 | PLAY [Base pre]
2026-03-30 00:00:22.010953 |
2026-03-30 00:00:22.011110 | TASK [Setup log path fact]
2026-03-30 00:00:22.050729 | orchestrator | ok
2026-03-30 00:00:22.082912 |
2026-03-30 00:00:22.083068 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-30 00:00:22.121714 | orchestrator | ok
2026-03-30 00:00:22.145703 |
2026-03-30 00:00:22.145816 | TASK [emit-job-header : Print job information]
2026-03-30 00:00:22.202551 | # Job Information
2026-03-30 00:00:22.202708 | Ansible Version: 2.16.14
2026-03-30 00:00:22.202743 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-30 00:00:22.202778 | Pipeline: periodic-midnight
2026-03-30 00:00:22.202801 | Executor: 521e9411259a
2026-03-30 00:00:22.202821 | Triggered by: https://github.com/osism/testbed
2026-03-30 00:00:22.202874 | Event ID: 4fbee65b2d564fd3998191c4a3b8d0f6
2026-03-30 00:00:22.214628 |
2026-03-30 00:00:22.214738 | LOOP [emit-job-header : Print node information]
2026-03-30 00:00:22.530251 | orchestrator | ok:
2026-03-30 00:00:22.530413 | orchestrator | # Node Information
2026-03-30 00:00:22.530446 | orchestrator | Inventory Hostname: orchestrator
2026-03-30 00:00:22.530471 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-30 00:00:22.530493 | orchestrator | Username: zuul-testbed05
2026-03-30 00:00:22.530515 | orchestrator | Distro: Debian 12.13
2026-03-30 00:00:22.530539 | orchestrator | Provider: static-testbed
2026-03-30 00:00:22.530560 | orchestrator | Region:
2026-03-30 00:00:22.530581 | orchestrator | Label: testbed-orchestrator
2026-03-30 00:00:22.530601 | orchestrator | Product Name: OpenStack Nova
2026-03-30 00:00:22.530620 | orchestrator | Interface IP: 81.163.193.140
2026-03-30 00:00:22.558488 |
2026-03-30 00:00:22.558600 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-30 00:00:23.660216 | orchestrator -> localhost | changed
2026-03-30 00:00:23.667970 |
2026-03-30 00:00:23.668094 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-30 00:00:26.506313 | orchestrator -> localhost | changed
2026-03-30 00:00:26.529193 |
2026-03-30 00:00:26.529306 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-30 00:00:27.263451 | orchestrator -> localhost | ok
2026-03-30 00:00:27.269234 |
2026-03-30 00:00:27.269328 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-30 00:00:27.307374 | orchestrator | ok
2026-03-30 00:00:27.383209 | orchestrator | included: /var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-30 00:00:27.443388 |
2026-03-30 00:00:27.443488 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-30 00:00:30.788327 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-30 00:00:30.788521 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/work/a0aa70b46ae349658f704d1d7df2bdbe_id_rsa
2026-03-30 00:00:30.788554 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/work/a0aa70b46ae349658f704d1d7df2bdbe_id_rsa.pub
2026-03-30 00:00:30.788575 | orchestrator -> localhost | The key fingerprint is:
2026-03-30 00:00:30.788596 | orchestrator -> localhost | SHA256:baTeG3w6KjponG2NANG+ns0rf95xOhYa3r+Xl4xlrjQ zuul-build-sshkey
2026-03-30 00:00:30.788614 | orchestrator -> localhost | The key's randomart image is:
2026-03-30 00:00:30.788641 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-30 00:00:30.788660 | orchestrator -> localhost | | . |
2026-03-30 00:00:30.788678 | orchestrator -> localhost | |. . |
2026-03-30 00:00:30.788694 | orchestrator -> localhost | | o . |
2026-03-30 00:00:30.788710 | orchestrator -> localhost | |. . + |
2026-03-30 00:00:30.788726 | orchestrator -> localhost | | . . S o |
2026-03-30 00:00:30.788745 | orchestrator -> localhost | | o ...+ o |
2026-03-30 00:00:30.788761 | orchestrator -> localhost | | o O + +o.= EB . |
2026-03-30 00:00:30.788776 | orchestrator -> localhost | | O B *.++.*+.= |
2026-03-30 00:00:30.788793 | orchestrator -> localhost | | . ++*.o++=+.o |
2026-03-30 00:00:30.788810 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-30 00:00:30.788852 | orchestrator -> localhost | ok: Runtime: 0:00:02.052153
2026-03-30 00:00:30.797752 |
2026-03-30 00:00:30.797840 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-30 00:00:30.845209 | orchestrator | ok
2026-03-30 00:00:30.862295 | orchestrator | included: /var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-30 00:00:30.888607 |
2026-03-30 00:00:30.888707 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-30 00:00:30.923723 | orchestrator | skipping: Conditional result was False
2026-03-30 00:00:30.930269 |
2026-03-30 00:00:30.930359 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-30 00:00:32.011278 | orchestrator | changed
2026-03-30 00:00:32.022496 |
2026-03-30 00:00:32.022587 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-30 00:00:32.333582 | orchestrator | ok
2026-03-30 00:00:32.338681 |
2026-03-30 00:00:32.340565 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-30 00:00:32.805152 | orchestrator | ok
2026-03-30 00:00:32.810408 |
2026-03-30 00:00:32.810494 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-30 00:00:33.270492 | orchestrator | ok
2026-03-30 00:00:33.275349 |
2026-03-30 00:00:33.275429 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-30 00:00:33.308010 | orchestrator | skipping: Conditional result was False
2026-03-30 00:00:33.313618 |
2026-03-30 00:00:33.313700 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-30 00:00:34.074756 | orchestrator -> localhost | changed
2026-03-30 00:00:34.096278 |
2026-03-30 00:00:34.096375 | TASK [add-build-sshkey : Add back temp key]
2026-03-30 00:00:35.275910 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/work/a0aa70b46ae349658f704d1d7df2bdbe_id_rsa (zuul-build-sshkey)
2026-03-30 00:00:35.276109 | orchestrator -> localhost | ok: Runtime: 0:00:00.029342
2026-03-30 00:00:35.282055 |
2026-03-30 00:00:35.282136 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-30 00:00:36.057595 | orchestrator | ok
2026-03-30 00:00:36.062585 |
2026-03-30 00:00:36.062673 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-30 00:00:36.125367 | orchestrator | skipping: Conditional result was False
2026-03-30 00:00:36.258239 |
2026-03-30 00:00:36.258337 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-30 00:00:36.786997 | orchestrator | ok
2026-03-30 00:00:36.816280 |
2026-03-30 00:00:36.816391 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-30 00:00:36.867938 | orchestrator | ok
2026-03-30 00:00:36.881639 |
2026-03-30 00:00:36.881739 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-30 00:00:37.716116 | orchestrator -> localhost | ok
2026-03-30 00:00:37.722236 |
2026-03-30 00:00:37.722322 | TASK [validate-host : Collect information about the host]
2026-03-30 00:00:39.660662 | orchestrator | ok
2026-03-30 00:00:39.682055 |
2026-03-30 00:00:39.682158 | TASK [validate-host : Sanitize hostname]
2026-03-30 00:00:39.793105 | orchestrator | ok
2026-03-30 00:00:39.799433 |
2026-03-30 00:00:39.799524 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-30 00:00:41.473915 | orchestrator -> localhost | changed
2026-03-30 00:00:41.479154 |
2026-03-30 00:00:41.479242 | TASK [validate-host : Collect information about zuul worker]
2026-03-30 00:00:42.118272 | orchestrator | ok
2026-03-30 00:00:42.122479 |
2026-03-30 00:00:42.122560 | TASK [validate-host : Write out all zuul information for each host]
2026-03-30 00:00:43.768199 | orchestrator -> localhost | changed
2026-03-30 00:00:43.776494 |
2026-03-30 00:00:43.776574 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-30 00:00:44.087295 | orchestrator | ok
2026-03-30 00:00:44.091985 |
2026-03-30 00:00:44.092082 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-30 00:02:05.025005 | orchestrator | changed:
2026-03-30 00:02:05.026029 | orchestrator | .d..t...... src/
2026-03-30 00:02:05.026093 | orchestrator | .d..t...... src/github.com/
2026-03-30 00:02:05.026122 | orchestrator | .d..t...... src/github.com/osism/
2026-03-30 00:02:05.026160 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-30 00:02:05.026184 | orchestrator | RedHat.yml
2026-03-30 00:02:05.042120 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-30 00:02:05.042138 | orchestrator | RedHat.yml
2026-03-30 00:02:05.042205 | orchestrator | = 1.53.0"...
2026-03-30 00:02:15.118815 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-30 00:02:15.255977 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-30 00:02:15.833331 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-30 00:02:15.895064 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-30 00:02:16.579221 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-30 00:02:16.642072 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-30 00:02:19.178945 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-30 00:02:19.179035 | orchestrator |
2026-03-30 00:02:19.179050 | orchestrator | Providers are signed by their developers.
2026-03-30 00:02:19.179062 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-30 00:02:19.179072 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-30 00:02:19.179099 | orchestrator |
2026-03-30 00:02:19.179111 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-30 00:02:19.179121 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-30 00:02:19.179148 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-30 00:02:19.179159 | orchestrator | you run "tofu init" in the future.
2026-03-30 00:02:19.179344 | orchestrator |
2026-03-30 00:02:19.179359 | orchestrator | OpenTofu has been successfully initialized!
2026-03-30 00:02:19.179380 | orchestrator |
2026-03-30 00:02:19.179398 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-30 00:02:19.179407 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-30 00:02:19.179418 | orchestrator | should now work.
2026-03-30 00:02:19.179428 | orchestrator |
2026-03-30 00:02:19.179438 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-30 00:02:19.179448 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-30 00:02:19.179459 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-30 00:02:19.355881 | orchestrator | Created and switched to workspace "ci"!
2026-03-30 00:02:19.355932 | orchestrator |
2026-03-30 00:02:19.355938 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-30 00:02:19.355944 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-30 00:02:19.355965 | orchestrator | for this configuration.
2026-03-30 00:02:19.466246 | orchestrator | ci.auto.tfvars
2026-03-30 00:02:20.357472 | orchestrator | default_custom.tf
2026-03-30 00:02:26.560161 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-30 00:02:27.091639 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-30 00:02:27.268981 | orchestrator |
2026-03-30 00:02:27.269150 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-30 00:02:27.269174 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-30 00:02:27.269187 | orchestrator | + create
2026-03-30 00:02:27.269199 | orchestrator | <= read (data resources)
2026-03-30 00:02:27.269433 | orchestrator |
2026-03-30 00:02:27.269454 | orchestrator | OpenTofu will perform the following actions:
2026-03-30 00:02:27.269692 | orchestrator |
2026-03-30 00:02:27.269712 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-30 00:02:27.269725 | orchestrator | # (config refers to values not yet known)
2026-03-30 00:02:27.274746 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-30 00:02:27.274813 | orchestrator | + checksum = (known after apply)
2026-03-30 00:02:27.274827 | orchestrator | + created_at = (known after apply)
2026-03-30 00:02:27.274840 | orchestrator | + file = (known after apply)
2026-03-30 00:02:27.274852 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.274894 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.274905 | orchestrator | + min_disk_gb = (known after apply)
2026-03-30 00:02:27.274915 | orchestrator | + min_ram_mb = (known after apply)
2026-03-30 00:02:27.274925 | orchestrator | + most_recent = true
2026-03-30 00:02:27.274936 | orchestrator | + name = (known after apply)
2026-03-30 00:02:27.274946 | orchestrator | + protected = (known after apply)
2026-03-30 00:02:27.274956 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.274970 | orchestrator | + schema = (known after apply)
2026-03-30 00:02:27.274980 | orchestrator | + size_bytes = (known after apply)
2026-03-30 00:02:27.274990 | orchestrator | + tags = (known after apply)
2026-03-30 00:02:27.274999 | orchestrator | + updated_at = (known after apply)
2026-03-30 00:02:27.275009 | orchestrator | }
2026-03-30 00:02:27.275019 | orchestrator |
2026-03-30 00:02:27.275030 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-30 00:02:27.275041 | orchestrator | # (config refers to values not yet known)
2026-03-30 00:02:27.275051 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-30 00:02:27.275062 | orchestrator | + checksum = (known after apply)
2026-03-30 00:02:27.275072 | orchestrator | + created_at = (known after apply)
2026-03-30 00:02:27.275081 | orchestrator | + file = (known after apply)
2026-03-30 00:02:27.275091 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.275100 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.275110 | orchestrator | + min_disk_gb = (known after apply)
2026-03-30 00:02:27.275119 | orchestrator | + min_ram_mb = (known after apply)
2026-03-30 00:02:27.275129 | orchestrator | + most_recent = true
2026-03-30 00:02:27.275139 | orchestrator | + name = (known after apply)
2026-03-30 00:02:27.275148 | orchestrator | + protected = (known after apply)
2026-03-30 00:02:27.275158 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.275167 | orchestrator | + schema = (known after apply)
2026-03-30 00:02:27.275177 | orchestrator | + size_bytes = (known after apply)
2026-03-30 00:02:27.275186 | orchestrator | + tags = (known after apply)
2026-03-30 00:02:27.275196 | orchestrator | + updated_at = (known after apply)
2026-03-30 00:02:27.275205 | orchestrator | }
2026-03-30 00:02:27.275215 | orchestrator |
2026-03-30 00:02:27.275225 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-30 00:02:27.275235 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-30 00:02:27.275245 | orchestrator | + content = (known after apply)
2026-03-30 00:02:27.275254 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-30 00:02:27.275264 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-30 00:02:27.275274 | orchestrator | + content_md5 = (known after apply)
2026-03-30 00:02:27.275283 | orchestrator | + content_sha1 = (known after apply)
2026-03-30 00:02:27.275293 | orchestrator | + content_sha256 = (known after apply)
2026-03-30 00:02:27.275302 | orchestrator | + content_sha512 = (known after apply)
2026-03-30 00:02:27.275312 | orchestrator | + directory_permission = "0777"
2026-03-30 00:02:27.275322 | orchestrator | + file_permission = "0644"
2026-03-30 00:02:27.275331 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-30 00:02:27.275341 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.275350 | orchestrator | }
2026-03-30 00:02:27.275411 | orchestrator |
2026-03-30 00:02:27.275423 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-30 00:02:27.275465 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-30 00:02:27.275475 | orchestrator | + content = (known after apply)
2026-03-30 00:02:27.275485 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-30 00:02:27.275495 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-30 00:02:27.275504 | orchestrator | + content_md5 = (known after apply)
2026-03-30 00:02:27.275514 | orchestrator | + content_sha1 = (known after apply)
2026-03-30 00:02:27.275523 | orchestrator | + content_sha256 = (known after apply)
2026-03-30 00:02:27.279168 | orchestrator | + content_sha512 = (known after apply)
2026-03-30 00:02:27.279238 | orchestrator | + directory_permission = "0777"
2026-03-30 00:02:27.279253 | orchestrator | + file_permission = "0644"
2026-03-30 00:02:27.279279 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-30 00:02:27.279286 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.279326 | orchestrator | }
2026-03-30 00:02:27.279333 | orchestrator |
2026-03-30 00:02:27.280067 | orchestrator | # local_file.inventory will be created
2026-03-30 00:02:27.280195 | orchestrator | + resource "local_file" "inventory" {
2026-03-30 00:02:27.280207 | orchestrator | + content = (known after apply)
2026-03-30 00:02:27.280216 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-30 00:02:27.280224 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-30 00:02:27.280230 | orchestrator | + content_md5 = (known after apply)
2026-03-30 00:02:27.280237 | orchestrator | + content_sha1 = (known after apply)
2026-03-30 00:02:27.280247 | orchestrator | + content_sha256 = (known after apply)
2026-03-30 00:02:27.280254 | orchestrator | + content_sha512 = (known after apply)
2026-03-30 00:02:27.280261 | orchestrator | + directory_permission = "0777"
2026-03-30 00:02:27.280268 | orchestrator | + file_permission = "0644"
2026-03-30 00:02:27.280275 | orchestrator | + filename = "inventory.ci"
2026-03-30 00:02:27.280282 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280288 | orchestrator | }
2026-03-30 00:02:27.280295 | orchestrator |
2026-03-30 00:02:27.280303 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-30 00:02:27.280311 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-30 00:02:27.280318 | orchestrator | + content = (sensitive value)
2026-03-30 00:02:27.280325 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-30 00:02:27.280331 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-30 00:02:27.280338 | orchestrator | + content_md5 = (known after apply)
2026-03-30 00:02:27.280344 | orchestrator | + content_sha1 = (known after apply)
2026-03-30 00:02:27.280351 | orchestrator | + content_sha256 = (known after apply)
2026-03-30 00:02:27.280358 | orchestrator | + content_sha512 = (known after apply)
2026-03-30 00:02:27.280365 | orchestrator | + directory_permission = "0700"
2026-03-30 00:02:27.280371 | orchestrator | + file_permission = "0600"
2026-03-30 00:02:27.280378 | orchestrator | + filename = ".id_rsa.ci"
2026-03-30 00:02:27.280385 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280391 | orchestrator | }
2026-03-30 00:02:27.280398 | orchestrator |
2026-03-30 00:02:27.280405 | orchestrator | # null_resource.node_semaphore will be created
2026-03-30 00:02:27.280412 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-30 00:02:27.280418 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280425 | orchestrator | }
2026-03-30 00:02:27.280432 | orchestrator |
2026-03-30 00:02:27.280439 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-30 00:02:27.280446 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-30 00:02:27.280453 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.280459 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.280466 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280473 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.280479 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.280486 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-30 00:02:27.280493 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.280499 | orchestrator | + size = 80
2026-03-30 00:02:27.280506 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.280512 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.280519 | orchestrator | }
2026-03-30 00:02:27.280525 | orchestrator |
2026-03-30 00:02:27.280532 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-30 00:02:27.280539 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-30 00:02:27.280545 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.280552 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.280559 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280600 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.280608 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.280615 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-30 00:02:27.280622 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.280629 | orchestrator | + size = 80
2026-03-30 00:02:27.280636 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.280642 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.280649 | orchestrator | }
2026-03-30 00:02:27.280656 | orchestrator |
2026-03-30 00:02:27.280662 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-30 00:02:27.280669 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-30 00:02:27.280676 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.280682 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.280689 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280696 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.280702 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.280709 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-30 00:02:27.280716 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.280722 | orchestrator | + size = 80
2026-03-30 00:02:27.280729 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.280736 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.280743 | orchestrator | }
2026-03-30 00:02:27.280749 | orchestrator |
2026-03-30 00:02:27.280756 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-30 00:02:27.280763 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-30 00:02:27.280770 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.280777 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.280798 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280806 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.280813 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.280819 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-30 00:02:27.280826 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.280833 | orchestrator | + size = 80
2026-03-30 00:02:27.280839 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.280846 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.280853 | orchestrator | }
2026-03-30 00:02:27.280859 | orchestrator |
2026-03-30 00:02:27.280866 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-30 00:02:27.280873 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-30 00:02:27.280880 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.280886 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.280893 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.280899 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.280906 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.280920 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-30 00:02:27.280928 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.280934 | orchestrator | + size = 80
2026-03-30 00:02:27.280941 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.280947 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.280954 | orchestrator | }
2026-03-30 00:02:27.280961 | orchestrator |
2026-03-30 00:02:27.280967 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-30 00:02:27.280974 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-30 00:02:27.280981 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.280988 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.280994 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281006 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.281013 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281020 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-30 00:02:27.281026 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281033 | orchestrator | + size = 80
2026-03-30 00:02:27.281040 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281046 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281053 | orchestrator | }
2026-03-30 00:02:27.281060 | orchestrator |
2026-03-30 00:02:27.281067 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-30 00:02:27.281073 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-30 00:02:27.281080 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281087 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281093 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281100 | orchestrator | + image_id = (known after apply)
2026-03-30 00:02:27.281107 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281113 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-30 00:02:27.281120 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281127 | orchestrator | + size = 80
2026-03-30 00:02:27.281133 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281140 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281147 | orchestrator | }
2026-03-30 00:02:27.281153 | orchestrator |
2026-03-30 00:02:27.281160 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-30 00:02:27.281167 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281174 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281181 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281188 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281194 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281201 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-30 00:02:27.281208 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281214 | orchestrator | + size = 20
2026-03-30 00:02:27.281222 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281228 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281235 | orchestrator | }
2026-03-30 00:02:27.281242 | orchestrator |
2026-03-30 00:02:27.281248 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-30 00:02:27.281255 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281262 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281269 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281275 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281282 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281289 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-30 00:02:27.281295 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281302 | orchestrator | + size = 20
2026-03-30 00:02:27.281308 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281315 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281322 | orchestrator | }
2026-03-30 00:02:27.281329 | orchestrator |
2026-03-30 00:02:27.281335 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-30 00:02:27.281342 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281349 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281356 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281362 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281369 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281376 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-30 00:02:27.281382 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281394 | orchestrator | + size = 20
2026-03-30 00:02:27.281401 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281407 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281414 | orchestrator | }
2026-03-30 00:02:27.281421 | orchestrator |
2026-03-30 00:02:27.281427 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-30 00:02:27.281434 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281441 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281447 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281460 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281467 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281474 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-30 00:02:27.281480 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281487 | orchestrator | + size = 20
2026-03-30 00:02:27.281494 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281500 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281507 | orchestrator | }
2026-03-30 00:02:27.281514 | orchestrator |
2026-03-30 00:02:27.281520 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-30 00:02:27.281527 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281534 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281541 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281548 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281554 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281561 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-30 00:02:27.281568 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281578 | orchestrator | + size = 20
2026-03-30 00:02:27.281607 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281614 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281621 | orchestrator | }
2026-03-30 00:02:27.281628 | orchestrator |
2026-03-30 00:02:27.281635 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-30 00:02:27.281642 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281648 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281655 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281662 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281669 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281675 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-30 00:02:27.281682 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281689 | orchestrator | + size = 20
2026-03-30 00:02:27.281695 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281702 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281709 | orchestrator | }
2026-03-30 00:02:27.281716 | orchestrator |
2026-03-30 00:02:27.281722 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-30 00:02:27.281729 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281736 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281743 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281750 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281757 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281763 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-30 00:02:27.281770 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281777 | orchestrator | + size = 20
2026-03-30 00:02:27.281783 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281790 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281797 | orchestrator | }
2026-03-30 00:02:27.281804 | orchestrator |
2026-03-30 00:02:27.281811 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-30 00:02:27.281817 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-30 00:02:27.281830 | orchestrator | + attachment = (known after apply)
2026-03-30 00:02:27.281837 | orchestrator | + availability_zone = "nova"
2026-03-30 00:02:27.281843 | orchestrator | + id = (known after apply)
2026-03-30 00:02:27.281850 | orchestrator | + metadata = (known after apply)
2026-03-30 00:02:27.281857 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-30 00:02:27.281864 | orchestrator | + region = (known after apply)
2026-03-30 00:02:27.281870 | orchestrator | + size = 20
2026-03-30 00:02:27.281877 | orchestrator | + volume_retype_policy = "never"
2026-03-30 00:02:27.281884 | orchestrator | + volume_type = "ssd"
2026-03-30 00:02:27.281891 | orchestrator | }
2026-03-30 00:02:27.281897 | orchestrator |
2026-03-30 00:02:27.281904 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-30 00:02:27.281911 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-30 00:02:27.281918 | orchestrator | + attachment = (known after apply) 2026-03-30 00:02:27.281925 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.281931 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.281938 | orchestrator | + metadata = (known after apply) 2026-03-30 00:02:27.281945 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-30 00:02:27.281951 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.281958 | orchestrator | + size = 20 2026-03-30 00:02:27.281965 | orchestrator | + volume_retype_policy = "never" 2026-03-30 00:02:27.281971 | orchestrator | + volume_type = "ssd" 2026-03-30 00:02:27.281978 | orchestrator | } 2026-03-30 00:02:27.281985 | orchestrator | 2026-03-30 00:02:27.281992 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-30 00:02:27.281999 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-30 00:02:27.282006 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.282039 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.282049 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.282055 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.282062 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.282069 | orchestrator | + config_drive = true 2026-03-30 00:02:27.282075 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.282082 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.282088 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-30 00:02:27.282095 | orchestrator | + force_delete = false 2026-03-30 00:02:27.282102 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.282109 | 
orchestrator | + id = (known after apply) 2026-03-30 00:02:27.282115 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.282122 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.282128 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.282135 | orchestrator | + name = "testbed-manager" 2026-03-30 00:02:27.282142 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.282148 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.282155 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.282161 | orchestrator | + stop_before_destroy = false 2026-03-30 00:02:27.282173 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.282180 | orchestrator | + user_data = (sensitive value) 2026-03-30 00:02:27.282187 | orchestrator | 2026-03-30 00:02:27.282235 | orchestrator | + block_device { 2026-03-30 00:02:27.282243 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.282250 | orchestrator | + delete_on_termination = false 2026-03-30 00:02:27.282260 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.282268 | orchestrator | + multiattach = false 2026-03-30 00:02:27.282309 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.282316 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.282328 | orchestrator | } 2026-03-30 00:02:27.282398 | orchestrator | 2026-03-30 00:02:27.282408 | orchestrator | + network { 2026-03-30 00:02:27.282415 | orchestrator | + access_network = false 2026-03-30 00:02:27.282422 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.282429 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.282435 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.282442 | orchestrator | + name = (known after apply) 2026-03-30 00:02:27.282449 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.282456 | orchestrator | + uuid = (known after apply) 2026-03-30 
00:02:27.282462 | orchestrator | } 2026-03-30 00:02:27.282469 | orchestrator | } 2026-03-30 00:02:27.282475 | orchestrator | 2026-03-30 00:02:27.282482 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-30 00:02:27.282489 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-30 00:02:27.282496 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.282502 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.282509 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.282516 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.282522 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.282529 | orchestrator | + config_drive = true 2026-03-30 00:02:27.282536 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.282542 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.282549 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-30 00:02:27.282555 | orchestrator | + force_delete = false 2026-03-30 00:02:27.282562 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.282569 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.282575 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.282625 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.282633 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.282640 | orchestrator | + name = "testbed-node-0" 2026-03-30 00:02:27.282646 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.282653 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.282660 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.282666 | orchestrator | + stop_before_destroy = false 2026-03-30 00:02:27.282673 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.282679 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-30 00:02:27.282686 | orchestrator | 2026-03-30 00:02:27.282693 | orchestrator | + block_device { 2026-03-30 00:02:27.282700 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.282707 | orchestrator | + delete_on_termination = false 2026-03-30 00:02:27.282713 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.282720 | orchestrator | + multiattach = false 2026-03-30 00:02:27.282727 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.282733 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.282740 | orchestrator | } 2026-03-30 00:02:27.282747 | orchestrator | 2026-03-30 00:02:27.282753 | orchestrator | + network { 2026-03-30 00:02:27.282761 | orchestrator | + access_network = false 2026-03-30 00:02:27.282767 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.282775 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.282781 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.282788 | orchestrator | + name = (known after apply) 2026-03-30 00:02:27.282794 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.282801 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.282808 | orchestrator | } 2026-03-30 00:02:27.282814 | orchestrator | } 2026-03-30 00:02:27.282821 | orchestrator | 2026-03-30 00:02:27.282828 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-30 00:02:27.282834 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-30 00:02:27.282841 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.282855 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.282862 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.282868 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.282875 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.282881 
| orchestrator | + config_drive = true 2026-03-30 00:02:27.282888 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.282895 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.282901 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-30 00:02:27.282908 | orchestrator | + force_delete = false 2026-03-30 00:02:27.282914 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.282921 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.282928 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.282934 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.282941 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.282948 | orchestrator | + name = "testbed-node-1" 2026-03-30 00:02:27.282954 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.282961 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.282967 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.282974 | orchestrator | + stop_before_destroy = false 2026-03-30 00:02:27.282981 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.282987 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-30 00:02:27.282994 | orchestrator | 2026-03-30 00:02:27.283001 | orchestrator | + block_device { 2026-03-30 00:02:27.283008 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.283014 | orchestrator | + delete_on_termination = false 2026-03-30 00:02:27.283021 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.283028 | orchestrator | + multiattach = false 2026-03-30 00:02:27.283034 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.283041 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.283048 | orchestrator | } 2026-03-30 00:02:27.283054 | orchestrator | 2026-03-30 00:02:27.283061 | orchestrator | + network { 2026-03-30 00:02:27.283075 | orchestrator | + access_network = 
false 2026-03-30 00:02:27.283082 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.283089 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.283096 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.283102 | orchestrator | + name = (known after apply) 2026-03-30 00:02:27.283109 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.283116 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.283123 | orchestrator | } 2026-03-30 00:02:27.283130 | orchestrator | } 2026-03-30 00:02:27.283136 | orchestrator | 2026-03-30 00:02:27.283143 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-30 00:02:27.283150 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-30 00:02:27.283156 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.283163 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.283170 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.283177 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.283188 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.283195 | orchestrator | + config_drive = true 2026-03-30 00:02:27.283202 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.283209 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.283216 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-30 00:02:27.283222 | orchestrator | + force_delete = false 2026-03-30 00:02:27.283229 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.283235 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.283242 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.283253 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.283260 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.283266 | orchestrator | + name = 
"testbed-node-2" 2026-03-30 00:02:27.283273 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.283280 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.283286 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.283293 | orchestrator | + stop_before_destroy = false 2026-03-30 00:02:27.283300 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.283306 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-30 00:02:27.283313 | orchestrator | 2026-03-30 00:02:27.283320 | orchestrator | + block_device { 2026-03-30 00:02:27.283327 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.283333 | orchestrator | + delete_on_termination = false 2026-03-30 00:02:27.283340 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.283346 | orchestrator | + multiattach = false 2026-03-30 00:02:27.283353 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.283359 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.283366 | orchestrator | } 2026-03-30 00:02:27.283373 | orchestrator | 2026-03-30 00:02:27.283379 | orchestrator | + network { 2026-03-30 00:02:27.283386 | orchestrator | + access_network = false 2026-03-30 00:02:27.283393 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.283399 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.283406 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.283413 | orchestrator | + name = (known after apply) 2026-03-30 00:02:27.283419 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.283426 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.283433 | orchestrator | } 2026-03-30 00:02:27.283439 | orchestrator | } 2026-03-30 00:02:27.283446 | orchestrator | 2026-03-30 00:02:27.283453 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-30 00:02:27.283460 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-30 00:02:27.283466 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.283473 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.283480 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.283486 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.283493 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.283499 | orchestrator | + config_drive = true 2026-03-30 00:02:27.283506 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.283512 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.283519 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-30 00:02:27.283526 | orchestrator | + force_delete = false 2026-03-30 00:02:27.283532 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.283539 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.283545 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.283552 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.283559 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.283566 | orchestrator | + name = "testbed-node-3" 2026-03-30 00:02:27.283572 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.283579 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.283602 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.283609 | orchestrator | + stop_before_destroy = false 2026-03-30 00:02:27.283616 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.283623 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-30 00:02:27.283629 | orchestrator | 2026-03-30 00:02:27.283636 | orchestrator | + block_device { 2026-03-30 00:02:27.283646 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.283654 | orchestrator | + delete_on_termination = false 2026-03-30 
00:02:27.283660 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.283680 | orchestrator | + multiattach = false 2026-03-30 00:02:27.283687 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.283694 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.283700 | orchestrator | } 2026-03-30 00:02:27.283707 | orchestrator | 2026-03-30 00:02:27.283714 | orchestrator | + network { 2026-03-30 00:02:27.283720 | orchestrator | + access_network = false 2026-03-30 00:02:27.283727 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.283734 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.283740 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.283747 | orchestrator | + name = (known after apply) 2026-03-30 00:02:27.283754 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.283761 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.283767 | orchestrator | } 2026-03-30 00:02:27.283774 | orchestrator | } 2026-03-30 00:02:27.283781 | orchestrator | 2026-03-30 00:02:27.283787 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-30 00:02:27.283799 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-30 00:02:27.283807 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.283814 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.283821 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.283827 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.283834 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.283840 | orchestrator | + config_drive = true 2026-03-30 00:02:27.283847 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.283854 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.283860 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-30 00:02:27.283867 | 
orchestrator | + force_delete = false 2026-03-30 00:02:27.283873 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.283880 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.283887 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.283894 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.283900 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.283907 | orchestrator | + name = "testbed-node-4" 2026-03-30 00:02:27.283913 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.283920 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.283927 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.283934 | orchestrator | + stop_before_destroy = false 2026-03-30 00:02:27.283940 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.283947 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-30 00:02:27.283954 | orchestrator | 2026-03-30 00:02:27.283960 | orchestrator | + block_device { 2026-03-30 00:02:27.283967 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.283974 | orchestrator | + delete_on_termination = false 2026-03-30 00:02:27.283981 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.283987 | orchestrator | + multiattach = false 2026-03-30 00:02:27.283994 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.284001 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.284007 | orchestrator | } 2026-03-30 00:02:27.284014 | orchestrator | 2026-03-30 00:02:27.284021 | orchestrator | + network { 2026-03-30 00:02:27.284027 | orchestrator | + access_network = false 2026-03-30 00:02:27.284034 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.284040 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.284047 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.284054 | orchestrator | + name = (known 
after apply) 2026-03-30 00:02:27.284060 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.284067 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.284074 | orchestrator | } 2026-03-30 00:02:27.284080 | orchestrator | } 2026-03-30 00:02:27.284091 | orchestrator | 2026-03-30 00:02:27.284098 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-30 00:02:27.284105 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-30 00:02:27.284112 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-30 00:02:27.284118 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-30 00:02:27.284125 | orchestrator | + all_metadata = (known after apply) 2026-03-30 00:02:27.284131 | orchestrator | + all_tags = (known after apply) 2026-03-30 00:02:27.284138 | orchestrator | + availability_zone = "nova" 2026-03-30 00:02:27.284145 | orchestrator | + config_drive = true 2026-03-30 00:02:27.284151 | orchestrator | + created = (known after apply) 2026-03-30 00:02:27.284158 | orchestrator | + flavor_id = (known after apply) 2026-03-30 00:02:27.284165 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-30 00:02:27.284171 | orchestrator | + force_delete = false 2026-03-30 00:02:27.284181 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-30 00:02:27.284188 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.284195 | orchestrator | + image_id = (known after apply) 2026-03-30 00:02:27.284202 | orchestrator | + image_name = (known after apply) 2026-03-30 00:02:27.284208 | orchestrator | + key_pair = "testbed" 2026-03-30 00:02:27.284215 | orchestrator | + name = "testbed-node-5" 2026-03-30 00:02:27.284221 | orchestrator | + power_state = "active" 2026-03-30 00:02:27.284228 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.284235 | orchestrator | + security_groups = (known after apply) 2026-03-30 00:02:27.284241 | orchestrator | + 
stop_before_destroy = false 2026-03-30 00:02:27.284248 | orchestrator | + updated = (known after apply) 2026-03-30 00:02:27.284255 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-30 00:02:27.284261 | orchestrator | 2026-03-30 00:02:27.284268 | orchestrator | + block_device { 2026-03-30 00:02:27.284274 | orchestrator | + boot_index = 0 2026-03-30 00:02:27.284281 | orchestrator | + delete_on_termination = false 2026-03-30 00:02:27.284288 | orchestrator | + destination_type = "volume" 2026-03-30 00:02:27.284294 | orchestrator | + multiattach = false 2026-03-30 00:02:27.284301 | orchestrator | + source_type = "volume" 2026-03-30 00:02:27.284308 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.284355 | orchestrator | } 2026-03-30 00:02:27.284363 | orchestrator | 2026-03-30 00:02:27.284370 | orchestrator | + network { 2026-03-30 00:02:27.284377 | orchestrator | + access_network = false 2026-03-30 00:02:27.284383 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-30 00:02:27.284390 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-30 00:02:27.284397 | orchestrator | + mac = (known after apply) 2026-03-30 00:02:27.284403 | orchestrator | + name = (known after apply) 2026-03-30 00:02:27.284410 | orchestrator | + port = (known after apply) 2026-03-30 00:02:27.284416 | orchestrator | + uuid = (known after apply) 2026-03-30 00:02:27.284423 | orchestrator | } 2026-03-30 00:02:27.284430 | orchestrator | } 2026-03-30 00:02:27.284436 | orchestrator | 2026-03-30 00:02:27.284443 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-30 00:02:27.284450 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-30 00:02:27.284457 | orchestrator | + fingerprint = (known after apply) 2026-03-30 00:02:27.284463 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.284470 | orchestrator | + name = "testbed" 2026-03-30 00:02:27.284476 | orchestrator | + private_key = 
(sensitive value) 2026-03-30 00:02:27.284483 | orchestrator | + public_key = (known after apply) 2026-03-30 00:02:27.284490 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.284496 | orchestrator | + user_id = (known after apply) 2026-03-30 00:02:27.284503 | orchestrator | } 2026-03-30 00:02:27.284510 | orchestrator | 2026-03-30 00:02:27.284516 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-30 00:02:27.284523 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-30 00:02:27.284540 | orchestrator | + device = (known after apply) 2026-03-30 00:02:27.284547 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.284554 | orchestrator | + instance_id = (known after apply) 2026-03-30 00:02:27.284561 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.284568 | orchestrator | + volume_id = (known after apply) 2026-03-30 00:02:27.284574 | orchestrator | } 2026-03-30 00:02:27.284600 | orchestrator | 2026-03-30 00:02:27.284607 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-30 00:02:27.284614 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-30 00:02:27.284621 | orchestrator | + device = (known after apply) 2026-03-30 00:02:27.284627 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.284634 | orchestrator | + instance_id = (known after apply) 2026-03-30 00:02:27.284641 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.284647 | orchestrator | + volume_id = (known after apply) 2026-03-30 00:02:27.284654 | orchestrator | } 2026-03-30 00:02:27.284661 | orchestrator | 2026-03-30 00:02:27.284667 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-30 00:02:27.284674 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-30 00:02:27.284681 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-30 00:02:27.288762 | orchestrator | + network_id = (known after apply) 2026-03-30 00:02:27.288769 | orchestrator | + no_gateway = false 2026-03-30 00:02:27.288776 | orchestrator | + region = (known after apply) 2026-03-30 00:02:27.288783 | orchestrator | + service_types = (known after apply) 2026-03-30 00:02:27.288797 | orchestrator | + tenant_id = (known after apply) 2026-03-30 00:02:27.288804 | orchestrator | 2026-03-30 00:02:27.288811 | orchestrator | + allocation_pool { 2026-03-30 00:02:27.288818 | orchestrator | + end = "192.168.31.250" 2026-03-30 00:02:27.288825 | orchestrator | + start = "192.168.31.200" 2026-03-30 00:02:27.288832 | orchestrator | } 2026-03-30 00:02:27.288838 | orchestrator | } 2026-03-30 00:02:27.288845 | orchestrator | 2026-03-30 00:02:27.288852 | orchestrator | # terraform_data.image will be created 2026-03-30 00:02:27.288859 | orchestrator | + resource "terraform_data" "image" { 2026-03-30 00:02:27.288865 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.288872 | orchestrator | + input = "Ubuntu 24.04" 2026-03-30 00:02:27.288879 | orchestrator | + output = (known after apply) 2026-03-30 00:02:27.288886 | orchestrator | } 2026-03-30 00:02:27.288893 | orchestrator | 2026-03-30 00:02:27.288900 | orchestrator | # terraform_data.image_node will be created 2026-03-30 00:02:27.288911 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-30 00:02:27.288918 | orchestrator | + id = (known after apply) 2026-03-30 00:02:27.288925 | orchestrator | + input = "Ubuntu 24.04" 2026-03-30 00:02:27.288932 | orchestrator | + output = (known after apply) 2026-03-30 00:02:27.288939 | orchestrator | } 2026-03-30 00:02:27.288946 | orchestrator | 2026-03-30 00:02:27.288953 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-30 00:02:27.288960 | orchestrator | 2026-03-30 00:02:27.288967 | orchestrator | Changes to Outputs: 2026-03-30 00:02:27.288974 | orchestrator | + manager_address = (sensitive value) 2026-03-30 00:02:27.288981 | orchestrator | + private_key = (sensitive value) 2026-03-30 00:02:27.546380 | orchestrator | terraform_data.image: Creating... 2026-03-30 00:02:27.546440 | orchestrator | terraform_data.image_node: Creating... 2026-03-30 00:02:27.546448 | orchestrator | terraform_data.image: Creation complete after 0s [id=5ba75f05-a80e-7822-47e8-7c9009f6e5f2] 2026-03-30 00:02:27.547148 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=b38f4a86-532e-b849-9f5b-32ffdec813b6] 2026-03-30 00:02:27.561797 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-30 00:02:27.563741 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-30 00:02:27.572663 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-30 00:02:27.572713 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-30 00:02:27.572719 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-30 00:02:27.572723 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-30 00:02:27.572728 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-30 00:02:27.572732 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-30 00:02:27.572736 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-30 00:02:27.578994 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
2026-03-30 00:02:28.011463 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-30 00:02:28.012074 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-30 00:02:28.020903 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-30 00:02:28.022041 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-30 00:02:28.072478 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-30 00:02:28.080608 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-30 00:02:29.060215 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=8197e25e-1ec1-49c5-a90d-32e8b6856f8f]
2026-03-30 00:02:29.073815 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-30 00:02:31.197131 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=06283a56-3f29-4145-9845-ba3e73029c57]
2026-03-30 00:02:31.202375 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=cc358305-34de-4116-8302-212671220cec]
2026-03-30 00:02:31.207340 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-30 00:02:31.211293 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-30 00:02:31.223481 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=11718c35-ee93-4e01-b68e-0ea3ca8f5a3f]
2026-03-30 00:02:31.227779 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-30 00:02:31.233996 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=73772ae7-f59b-43b9-ae4a-d5ef866e883c]
2026-03-30 00:02:31.245673 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=6acc619e-8818-4e1c-86d6-dab030db0f74]
2026-03-30 00:02:31.247204 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-30 00:02:31.250399 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=f4b6223c-7e5a-4bfd-b745-cff7b69b076a]
2026-03-30 00:02:31.250688 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-30 00:02:31.259796 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-30 00:02:31.300472 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=482d2c36-c609-4f47-a0c5-2f5f73693543]
2026-03-30 00:02:31.316591 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-30 00:02:31.319524 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=8036b2a3-a86f-46db-9367-e2397ecc6abf]
2026-03-30 00:02:31.321509 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=81d02518f9cd8ef149315925a83ae60f251a419d]
2026-03-30 00:02:31.321764 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=e10eeafd-2903-4790-b7e1-aa168837035a]
2026-03-30 00:02:31.328743 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-30 00:02:31.332335 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-30 00:02:31.333328 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=1c9e690fbde84a027d2cca477ea905570f1d086e]
2026-03-30 00:02:32.122692 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=73416f47-a429-4acd-897d-a649e4f3168e]
2026-03-30 00:02:32.130626 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-30 00:02:32.420437 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=2d0abad1-e3c2-4c21-a543-5ea974ffa3d0]
2026-03-30 00:02:34.610549 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=0349b975-80be-4625-9fdc-e308e57655f5]
2026-03-30 00:02:34.630611 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=1826e9b5-14e8-452e-be3c-21e3cc09cbbf]
2026-03-30 00:02:34.638972 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f58a6771-57f0-4c4e-a989-2765e31a048e]
2026-03-30 00:02:34.738326 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=000e91a4-99ec-4ebf-a015-8c98def25257]
2026-03-30 00:02:34.818332 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=453a1142-2b55-4cf7-822b-e97736e949f0]
2026-03-30 00:02:34.854560 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6e75c076-35fa-416f-b046-253c4346d0dc]
2026-03-30 00:02:36.667488 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=b4e34ed6-4b38-437f-a2ba-c2eeb483457d]
2026-03-30 00:02:36.680870 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-30 00:02:36.683430 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-30 00:02:36.683746 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-30 00:02:36.920416 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d235fa46-3a8c-4284-9114-90d94015d425]
2026-03-30 00:02:36.928112 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-30 00:02:36.930535 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-30 00:02:36.931258 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-30 00:02:36.931443 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-30 00:02:36.938484 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-30 00:02:36.938996 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-30 00:02:37.124302 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=c9795c8b-74dc-4edf-9f4b-f658f929ad8c]
2026-03-30 00:02:37.276977 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=386c5f31-f4da-44b3-9c90-30192ba9577d]
2026-03-30 00:02:37.489913 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d25dfb40-6449-45dd-8795-33486dfbd33f]
2026-03-30 00:02:37.859953 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=3e98b309-f094-4bf1-a1ab-25bd859e65bd]
2026-03-30 00:02:37.869653 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-30 00:02:37.877414 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-30 00:02:37.878552 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-30 00:02:37.883534 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-30 00:02:37.885144 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-30 00:02:37.888338 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-30 00:02:37.897765 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a2ce7dfe-83ec-4628-82fc-d16c50e75010]
2026-03-30 00:02:37.907521 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-30 00:02:37.929583 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=fd4b5611-9510-449f-b8f7-d6f2aee0660a]
2026-03-30 00:02:37.940331 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-30 00:02:38.075651 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=085610cf-af98-414d-aa4a-64b3f866d862]
2026-03-30 00:02:38.084302 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-30 00:02:38.599149 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=38934f7a-9e81-40bd-8ce7-cbd833a11f6a]
2026-03-30 00:02:38.614305 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-30 00:02:38.789931 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=ed0638d3-1f84-4a75-bb5a-10472f05e801]
2026-03-30 00:02:38.945660 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8b6cf66f-8de4-4e5d-b517-10a0f8cf9233]
2026-03-30 00:02:39.077031 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=436aac42-5261-465a-b138-22ed51c8fd47]
2026-03-30 00:02:39.106157 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=7b702425-637c-48e7-a73c-f8442caeb0b4]
2026-03-30 00:02:39.118226 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-30 00:02:39.216717 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=a091018f-e568-41b8-95d5-47ab5ce14d7a]
2026-03-30 00:02:39.312485 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=afa78edc-8b79-4a33-b6c7-a0d97534e4d0]
2026-03-30 00:02:39.484337 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=9eb0288d-fb50-4071-871b-0ff700d20df2]
2026-03-30 00:02:39.723109 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=dc808af2-ce53-49a8-b742-a6989c5a633a]
2026-03-30 00:02:39.811022 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=bea88e12-5a2d-476c-93ca-f1ce3231df8b]
2026-03-30 00:02:41.208885 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 3s [id=575a04ae-fbad-432d-9f15-1dce71819142]
2026-03-30 00:02:41.235710 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-30 00:02:41.236116 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-30 00:02:41.238461 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-30 00:02:41.249516 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-30 00:02:41.250890 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-30 00:02:41.258074 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-30 00:02:41.882005 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=5edd5a5f-5955-4b35-8efe-1797eb1bef1b]
2026-03-30 00:02:41.888659 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-30 00:02:41.898994 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-30 00:02:41.902552 | orchestrator | local_file.inventory: Creating...
2026-03-30 00:02:41.903628 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=07ad62fa47dbd0714141c98b64c4e809d8c1a21a]
2026-03-30 00:02:41.907075 | orchestrator | local_file.inventory: Creation complete after 0s [id=9d28a41c4091f333c5df28f7c4d351893a370dd8]
2026-03-30 00:02:43.009479 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5edd5a5f-5955-4b35-8efe-1797eb1bef1b]
2026-03-30 00:02:51.239722 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-30 00:02:51.239861 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-30 00:02:51.240882 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-30 00:02:51.255166 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-30 00:02:51.256360 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-30 00:02:51.258631 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-30 00:03:01.239937 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-30 00:03:01.240086 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-30 00:03:01.241300 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-30 00:03:01.255868 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-30 00:03:01.256909 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-30 00:03:01.259211 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-30 00:03:11.248498 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-30 00:03:11.248662 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-30 00:03:11.248682 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-30 00:03:11.256914 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-30 00:03:11.257149 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-30 00:03:11.259392 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-30 00:03:21.257265 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-30 00:03:21.257371 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-30 00:03:21.257383 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-30 00:03:21.257401 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-30 00:03:21.257409 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-30 00:03:21.260629 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-30 00:03:31.266567 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-30 00:03:31.266676 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-30 00:03:31.266685 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-30 00:03:31.266699 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-30 00:03:31.266704 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-30 00:03:31.266709 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-30 00:03:32.011042 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=344081ae-b592-4302-91c9-1cc13ba88f75]
2026-03-30 00:03:41.275581 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-03-30 00:03:41.275672 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-03-30 00:03:41.275679 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-03-30 00:03:41.275691 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-03-30 00:03:41.275696 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-30 00:03:42.265888 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=d538221c-fd8c-4031-a50b-016e53cdd32e]
2026-03-30 00:03:42.776910 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m2s [id=c7653798-b8e7-4a8a-bd2a-177a65209d7d]
2026-03-30 00:03:42.806291 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m2s [id=e5a72853-f387-4b17-bfdb-bcbc7cacf66a]
2026-03-30 00:03:42.993834 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m2s [id=2d5fe969-a568-42f0-933c-73164a523186]
2026-03-30 00:03:51.284658 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-03-30 00:03:52.876918 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m12s [id=a8f596b7-d6a2-43c1-95d6-85317001a37f]
2026-03-30 00:03:52.905829 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-30 00:03:52.952687 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-30 00:03:52.967384 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8394492647484648184]
2026-03-30 00:03:52.981682 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-30 00:03:52.994087 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-30 00:03:52.995477 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-30 00:03:52.997339 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-30 00:03:53.006228 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-30 00:03:53.006310 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-30 00:03:53.030630 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-30 00:03:53.031498 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-30 00:03:53.031901 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-30 00:03:56.356941 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=344081ae-b592-4302-91c9-1cc13ba88f75/06283a56-3f29-4145-9845-ba3e73029c57]
2026-03-30 00:03:56.405505 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=344081ae-b592-4302-91c9-1cc13ba88f75/6acc619e-8818-4e1c-86d6-dab030db0f74]
2026-03-30 00:03:56.518102 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=c7653798-b8e7-4a8a-bd2a-177a65209d7d/f4b6223c-7e5a-4bfd-b745-cff7b69b076a]
2026-03-30 00:03:56.534367 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=e5a72853-f387-4b17-bfdb-bcbc7cacf66a/11718c35-ee93-4e01-b68e-0ea3ca8f5a3f]
2026-03-30 00:03:56.558976 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=e5a72853-f387-4b17-bfdb-bcbc7cacf66a/8036b2a3-a86f-46db-9367-e2397ecc6abf]
2026-03-30 00:03:56.583825 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=c7653798-b8e7-4a8a-bd2a-177a65209d7d/cc358305-34de-4116-8302-212671220cec]
2026-03-30 00:04:02.614815 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=344081ae-b592-4302-91c9-1cc13ba88f75/73772ae7-f59b-43b9-ae4a-d5ef866e883c]
2026-03-30 00:04:02.668806 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=e5a72853-f387-4b17-bfdb-bcbc7cacf66a/482d2c36-c609-4f47-a0c5-2f5f73693543]
2026-03-30 00:04:02.762279 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=c7653798-b8e7-4a8a-bd2a-177a65209d7d/e10eeafd-2903-4790-b7e1-aa168837035a]
2026-03-30 00:04:03.033036 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-30 00:04:13.042359 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-30 00:04:13.591745 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=eaecd79c-e44d-49af-826e-caab3165681c]
2026-03-30 00:04:13.613549 | orchestrator |
2026-03-30 00:04:13.613626 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-30 00:04:13.613636 | orchestrator |
2026-03-30 00:04:13.613643 | orchestrator | Outputs:
2026-03-30 00:04:13.613650 | orchestrator |
2026-03-30 00:04:13.613656 | orchestrator | manager_address = 
2026-03-30 00:04:13.613663 | orchestrator | private_key = 
2026-03-30 00:04:13.994876 | orchestrator | ok: Runtime: 0:01:58.744189
2026-03-30 00:04:14.031157 |
2026-03-30 00:04:14.031507 | TASK [Create infrastructure (stable)]
2026-03-30 00:04:14.566520 | orchestrator | skipping: Conditional result was False
2026-03-30 00:04:14.590416 |
2026-03-30 00:04:14.590604 | TASK [Fetch manager address]
2026-03-30 00:04:15.081524 | orchestrator | ok
2026-03-30 00:04:15.088769 |
2026-03-30 00:04:15.088886 | TASK [Set manager_host address]
2026-03-30 00:04:15.168411 | orchestrator | ok
2026-03-30 00:04:15.178413 |
2026-03-30 00:04:15.178577 | LOOP [Update ansible collections]
2026-03-30 00:04:16.278120 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-30 00:04:16.278519 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-30 00:04:16.278582 | orchestrator | Starting galaxy collection install process
2026-03-30 00:04:16.278623 | orchestrator | Process install dependency map
2026-03-30 00:04:16.278659 | orchestrator | Starting collection install process
2026-03-30 00:04:16.278693 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-03-30 00:04:16.278735 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-03-30 00:04:16.278787 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-30 00:04:16.278896 | orchestrator | ok: Item: commons Runtime: 0:00:00.758970
2026-03-30 00:04:17.518637 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-30 00:04:17.518993 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-30 00:04:17.519080 | orchestrator | Starting galaxy collection install process
2026-03-30 00:04:17.519132 | orchestrator | Process install dependency map
2026-03-30 00:04:17.519177 | orchestrator | Starting collection install process
2026-03-30 00:04:17.519217 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-03-30 00:04:17.519253 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-03-30 00:04:17.519308 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-30 00:04:17.519368 | orchestrator | ok: Item: services Runtime: 0:00:00.924135
2026-03-30 00:04:17.546166 |
2026-03-30 00:04:17.546380 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-30 00:04:28.129825 | orchestrator | ok
2026-03-30 00:04:28.142631 |
2026-03-30 00:04:28.142761 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-30 00:05:28.178032 | orchestrator | ok
2026-03-30 00:05:28.187021 |
2026-03-30 00:05:28.187149 | TASK [Fetch manager ssh hostkey]
2026-03-30 00:05:29.767614 | orchestrator | Output suppressed because no_log was given
2026-03-30 00:05:29.774985 |
2026-03-30 00:05:29.775114 | TASK [Get ssh keypair from terraform environment]
2026-03-30 00:05:30.308431 | orchestrator | ok: Runtime: 0:00:00.006273
2026-03-30 00:05:30.323418 |
2026-03-30 00:05:30.323578 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-30 00:05:30.373593 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-30 00:05:30.383524 |
2026-03-30 00:05:30.383650 | TASK [Run manager part 0]
2026-03-30 00:05:31.627974 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-30 00:05:31.697953 | orchestrator |
2026-03-30 00:05:31.698065 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-30 00:05:31.698080 | orchestrator |
2026-03-30 00:05:31.698103 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-30 00:05:33.463445 | orchestrator | ok: [testbed-manager]
2026-03-30 00:05:33.463574 | orchestrator |
2026-03-30 00:05:33.463605 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-30 00:05:33.463619 | orchestrator |
2026-03-30 00:05:33.463632 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-30 00:05:35.327244 | orchestrator | ok: [testbed-manager]
2026-03-30 00:05:35.327375 | orchestrator |
2026-03-30 00:05:35.327399 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-30 00:05:36.018962 | orchestrator | ok: [testbed-manager]
2026-03-30 00:05:36.019021 | orchestrator |
2026-03-30 00:05:36.019033 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-30 00:05:36.061144 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:05:36.061192 | orchestrator |
2026-03-30 00:05:36.061200 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-30 00:05:36.096739 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:05:36.096801 | orchestrator |
2026-03-30 00:05:36.096813 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-30 00:05:36.134934 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:05:36.135007 | orchestrator |
2026-03-30 00:05:36.135017 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-30 00:05:36.823278 | orchestrator | changed: [testbed-manager]
2026-03-30 00:05:36.823404 | orchestrator |
2026-03-30 00:05:36.823416 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-30 00:08:24.999565 | orchestrator | changed: [testbed-manager]
2026-03-30 00:08:24.999678 | orchestrator |
2026-03-30 00:08:24.999698 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-30 00:09:36.501867 | orchestrator | changed: [testbed-manager]
2026-03-30 00:09:36.501951 | orchestrator |
2026-03-30 00:09:36.501988 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-30 00:09:58.196977 | orchestrator | changed: [testbed-manager]
2026-03-30 00:09:58.197204 | orchestrator |
2026-03-30 00:09:58.197231 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-30 00:10:09.191854 | orchestrator | changed: [testbed-manager]
2026-03-30
00:10:09.191956 | orchestrator | 2026-03-30 00:10:09.191972 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-30 00:10:09.237171 | orchestrator | ok: [testbed-manager] 2026-03-30 00:10:09.237229 | orchestrator | 2026-03-30 00:10:09.237244 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-30 00:10:10.051421 | orchestrator | ok: [testbed-manager] 2026-03-30 00:10:10.051488 | orchestrator | 2026-03-30 00:10:10.051500 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-30 00:10:10.739418 | orchestrator | changed: [testbed-manager] 2026-03-30 00:10:10.739491 | orchestrator | 2026-03-30 00:10:10.739509 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-30 00:10:16.927199 | orchestrator | changed: [testbed-manager] 2026-03-30 00:10:16.927300 | orchestrator | 2026-03-30 00:10:16.927318 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-30 00:10:22.745271 | orchestrator | changed: [testbed-manager] 2026-03-30 00:10:22.745450 | orchestrator | 2026-03-30 00:10:22.745467 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-30 00:10:25.384532 | orchestrator | changed: [testbed-manager] 2026-03-30 00:10:25.384570 | orchestrator | 2026-03-30 00:10:25.384576 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-30 00:10:27.063995 | orchestrator | changed: [testbed-manager] 2026-03-30 00:10:27.064031 | orchestrator | 2026-03-30 00:10:27.064037 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-30 00:10:28.081670 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-30 00:10:28.081745 | orchestrator | changed: 
[testbed-manager] => (item=osism/ansible-collection-services) 2026-03-30 00:10:28.081752 | orchestrator | 2026-03-30 00:10:28.081758 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-30 00:10:28.119885 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-30 00:10:28.119940 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-30 00:10:28.119949 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-30 00:10:28.119958 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-30 00:10:31.342927 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-30 00:10:31.342967 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-30 00:10:31.342972 | orchestrator | 2026-03-30 00:10:31.342977 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-30 00:10:31.846689 | orchestrator | changed: [testbed-manager] 2026-03-30 00:10:31.846722 | orchestrator | 2026-03-30 00:10:31.846727 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-30 00:11:59.464796 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-30 00:11:59.465110 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-30 00:11:59.465146 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-30 00:11:59.465157 | orchestrator | 2026-03-30 00:11:59.465167 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-30 00:12:02.182175 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-30 00:12:02.182262 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-30 
00:12:02.182277 | orchestrator | 2026-03-30 00:12:02.182292 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-30 00:12:02.182304 | orchestrator | 2026-03-30 00:12:02.182315 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:12:03.667820 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:03.667917 | orchestrator | 2026-03-30 00:12:03.667933 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-30 00:12:03.722793 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:03.722880 | orchestrator | 2026-03-30 00:12:03.722896 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-30 00:12:03.800776 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:03.800864 | orchestrator | 2026-03-30 00:12:03.800878 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-30 00:12:04.594267 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:04.594657 | orchestrator | 2026-03-30 00:12:04.594682 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-30 00:12:05.365534 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:05.365642 | orchestrator | 2026-03-30 00:12:05.365658 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-30 00:12:06.830802 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-30 00:12:06.830872 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-30 00:12:06.830886 | orchestrator | 2026-03-30 00:12:06.830900 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-30 00:12:08.228067 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:08.228174 | orchestrator | 2026-03-30 00:12:08.228192 | 
orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-30 00:12:10.031514 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-30 00:12:10.031586 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-30 00:12:10.031604 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-30 00:12:10.031611 | orchestrator | 2026-03-30 00:12:10.031619 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-30 00:12:10.091067 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:10.091115 | orchestrator | 2026-03-30 00:12:10.091125 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-30 00:12:10.169729 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:10.169771 | orchestrator | 2026-03-30 00:12:10.169779 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-30 00:12:10.768871 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:10.768914 | orchestrator | 2026-03-30 00:12:10.768922 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-30 00:12:10.826167 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:10.826259 | orchestrator | 2026-03-30 00:12:10.826279 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-30 00:12:11.773620 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-30 00:12:11.773721 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:11.773738 | orchestrator | 2026-03-30 00:12:11.773750 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-30 00:12:11.812466 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:11.812599 | 
orchestrator | 2026-03-30 00:12:11.812618 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-30 00:12:11.848948 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:11.849035 | orchestrator | 2026-03-30 00:12:11.849051 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-30 00:12:11.890777 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:11.890889 | orchestrator | 2026-03-30 00:12:11.890914 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-30 00:12:11.974240 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:11.974310 | orchestrator | 2026-03-30 00:12:11.974317 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-30 00:12:12.744414 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:12.744450 | orchestrator | 2026-03-30 00:12:12.744456 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-30 00:12:12.744461 | orchestrator | 2026-03-30 00:12:12.744467 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:12:14.136834 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:14.136942 | orchestrator | 2026-03-30 00:12:14.136958 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-30 00:12:15.142109 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:15.142936 | orchestrator | 2026-03-30 00:12:15.142975 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:12:15.142994 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-30 00:12:15.143008 | orchestrator | 2026-03-30 00:12:15.671595 | orchestrator | ok: Runtime: 0:06:44.478427 
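The "Run manager part 0" play above creates a venv directory and then installs pinned Python dependencies (netaddr, ansible-core, requests>=2.32.2, docker>=7.1.0) into it. A minimal sketch of that bootstrap pattern, with an illustrative path instead of the job's actual /opt/venv (the pip line is shown as a comment only, since it would need network access):

```shell
#!/usr/bin/env bash
# Sketch of the venv bootstrap from "Run manager part 0".
# The path is illustrative; the job itself uses /opt/venv.
set -e
venv=/tmp/demo-venv
python3 -m venv "$venv"
# Pinned installs then go through the venv's own pip, e.g.:
#   "$venv/bin/pip" install 'requests>=2.32.2' 'docker>=7.1.0' netaddr
"$venv/bin/python" --version
rm -rf "$venv"
```

Using the venv's own pip/python binaries (rather than activating it) keeps the install independent of the caller's shell environment, which is why later plays can simply source /opt/venv/bin/activate and find everything in place.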
2026-03-30 00:12:15.692000 | 2026-03-30 00:12:15.692146 | TASK [Point out that logging in on the manager is now possible] 2026-03-30 00:12:15.739411 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-30 00:12:15.748914 | 2026-03-30 00:12:15.749032 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-30 00:12:15.785889 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-30 00:12:15.795005 | 2026-03-30 00:12:15.795127 | TASK [Run manager part 1 + 2] 2026-03-30 00:12:16.693751 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-30 00:12:16.751399 | orchestrator | 2026-03-30 00:12:16.751500 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-30 00:12:16.751526 | orchestrator | 2026-03-30 00:12:16.751586 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:12:19.717241 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:19.717308 | orchestrator | 2026-03-30 00:12:19.717348 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-30 00:12:19.764462 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:19.764527 | orchestrator | 2026-03-30 00:12:19.764563 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-30 00:12:19.823263 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:19.823345 | orchestrator | 2026-03-30 00:12:19.823361 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-30 00:12:19.874167 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:19.874226 | orchestrator | 2026-03-30 00:12:19.874236 |
orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-30 00:12:19.945797 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:19.945895 | orchestrator | 2026-03-30 00:12:19.945906 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-30 00:12:20.005280 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:20.005332 | orchestrator | 2026-03-30 00:12:20.005341 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-30 00:12:20.043974 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-30 00:12:20.044065 | orchestrator | 2026-03-30 00:12:20.044082 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-30 00:12:20.796723 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:20.796805 | orchestrator | 2026-03-30 00:12:20.796820 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-30 00:12:20.849743 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:20.849804 | orchestrator | 2026-03-30 00:12:20.849812 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-30 00:12:22.289470 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:22.289599 | orchestrator | 2026-03-30 00:12:22.289614 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-30 00:12:22.874933 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:22.875027 | orchestrator | 2026-03-30 00:12:22.875045 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-30 00:12:24.061649 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:24.061745 | orchestrator | 
2026-03-30 00:12:24.061765 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-30 00:12:41.262870 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:41.263067 | orchestrator | 2026-03-30 00:12:41.263081 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-30 00:12:41.954689 | orchestrator | ok: [testbed-manager] 2026-03-30 00:12:41.954778 | orchestrator | 2026-03-30 00:12:41.954796 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-30 00:12:42.009981 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:42.010115 | orchestrator | 2026-03-30 00:12:42.010143 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-30 00:12:42.992372 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:42.992458 | orchestrator | 2026-03-30 00:12:42.992472 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-30 00:12:43.996807 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:43.996927 | orchestrator | 2026-03-30 00:12:43.996946 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-30 00:12:44.586471 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:44.586571 | orchestrator | 2026-03-30 00:12:44.586587 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-30 00:12:44.627984 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-30 00:12:44.628113 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-30 00:12:44.628131 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-30 00:12:44.628143 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-30 00:12:46.613777 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:46.613825 | orchestrator | 2026-03-30 00:12:46.613834 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-30 00:12:55.383595 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-30 00:12:55.383631 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-30 00:12:55.383637 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-30 00:12:55.383642 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-30 00:12:55.383650 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-30 00:12:55.383654 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-30 00:12:55.383658 | orchestrator | 2026-03-30 00:12:55.383663 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-30 00:12:56.370240 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:56.370278 | orchestrator | 2026-03-30 00:12:56.370284 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-30 00:12:59.390207 | orchestrator | changed: [testbed-manager] 2026-03-30 00:12:59.390253 | orchestrator | 2026-03-30 00:12:59.390263 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-30 00:12:59.425335 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:12:59.425374 | orchestrator | 2026-03-30 00:12:59.425381 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-30 00:14:39.684053 | orchestrator | changed: [testbed-manager] 2026-03-30 00:14:39.684088 | orchestrator | 2026-03-30 00:14:39.684094 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-30 00:14:40.849512 | orchestrator | ok: [testbed-manager] 2026-03-30 00:14:40.849597 | 
orchestrator | 2026-03-30 00:14:40.849616 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:14:40.849631 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-30 00:14:40.849644 | orchestrator | 2026-03-30 00:14:41.421128 | orchestrator | ok: Runtime: 0:02:24.848857 2026-03-30 00:14:41.437503 | 2026-03-30 00:14:41.437680 | TASK [Reboot manager] 2026-03-30 00:14:42.980627 | orchestrator | ok: Runtime: 0:00:00.945927 2026-03-30 00:14:42.998587 | 2026-03-30 00:14:42.998805 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-30 00:14:56.951064 | orchestrator | ok 2026-03-30 00:14:56.958429 | 2026-03-30 00:14:56.958536 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-30 00:15:56.996844 | orchestrator | ok 2026-03-30 00:15:57.005223 | 2026-03-30 00:15:57.005343 | TASK [Deploy manager + bootstrap nodes] 2026-03-30 00:15:59.435596 | orchestrator | 2026-03-30 00:15:59.435851 | orchestrator | # DEPLOY MANAGER 2026-03-30 00:15:59.435879 | orchestrator | 2026-03-30 00:15:59.435894 | orchestrator | + set -e 2026-03-30 00:15:59.435908 | orchestrator | + echo 2026-03-30 00:15:59.435923 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-30 00:15:59.435941 | orchestrator | + echo 2026-03-30 00:15:59.435990 | orchestrator | + cat /opt/manager-vars.sh 2026-03-30 00:15:59.438447 | orchestrator | export NUMBER_OF_NODES=6 2026-03-30 00:15:59.438481 | orchestrator | 2026-03-30 00:15:59.438494 | orchestrator | export CEPH_VERSION=reef 2026-03-30 00:15:59.438508 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-30 00:15:59.438521 | orchestrator | export MANAGER_VERSION=latest 2026-03-30 00:15:59.438545 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-30 00:15:59.438556 | orchestrator | 2026-03-30 00:15:59.438575 | orchestrator | export ARA=false 2026-03-30 00:15:59.438587 | 
orchestrator | export DEPLOY_MODE=manager 2026-03-30 00:15:59.438605 | orchestrator | export TEMPEST=true 2026-03-30 00:15:59.438617 | orchestrator | export IS_ZUUL=true 2026-03-30 00:15:59.438628 | orchestrator | 2026-03-30 00:15:59.438647 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 00:15:59.438659 | orchestrator | export EXTERNAL_API=false 2026-03-30 00:15:59.438671 | orchestrator | 2026-03-30 00:15:59.438681 | orchestrator | export IMAGE_USER=ubuntu 2026-03-30 00:15:59.438698 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-30 00:15:59.438708 | orchestrator | 2026-03-30 00:15:59.438719 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-30 00:15:59.438736 | orchestrator | 2026-03-30 00:15:59.438748 | orchestrator | + echo 2026-03-30 00:15:59.438761 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-30 00:15:59.439251 | orchestrator | ++ export INTERACTIVE=false 2026-03-30 00:15:59.439269 | orchestrator | ++ INTERACTIVE=false 2026-03-30 00:15:59.439396 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-30 00:15:59.439412 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-30 00:15:59.439528 | orchestrator | + source /opt/manager-vars.sh 2026-03-30 00:15:59.439544 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-30 00:15:59.439555 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-30 00:15:59.439566 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-30 00:15:59.439624 | orchestrator | ++ CEPH_VERSION=reef 2026-03-30 00:15:59.439638 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-30 00:15:59.439649 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-30 00:15:59.439861 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 00:15:59.439876 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 00:15:59.439887 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-30 00:15:59.439906 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-30 00:15:59.440064 | orchestrator | ++ 
export ARA=false 2026-03-30 00:15:59.440080 | orchestrator | ++ ARA=false 2026-03-30 00:15:59.440091 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-30 00:15:59.440102 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-30 00:15:59.440112 | orchestrator | ++ export TEMPEST=true 2026-03-30 00:15:59.440123 | orchestrator | ++ TEMPEST=true 2026-03-30 00:15:59.440134 | orchestrator | ++ export IS_ZUUL=true 2026-03-30 00:15:59.440145 | orchestrator | ++ IS_ZUUL=true 2026-03-30 00:15:59.440256 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 00:15:59.440270 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 00:15:59.440281 | orchestrator | ++ export EXTERNAL_API=false 2026-03-30 00:15:59.440292 | orchestrator | ++ EXTERNAL_API=false 2026-03-30 00:15:59.440303 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-30 00:15:59.440314 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-30 00:15:59.440324 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-30 00:15:59.440335 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-30 00:15:59.440347 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-30 00:15:59.440358 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-30 00:15:59.440369 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-30 00:15:59.493881 | orchestrator | + docker version 2026-03-30 00:15:59.587153 | orchestrator | Client: Docker Engine - Community 2026-03-30 00:15:59.587340 | orchestrator | Version: 27.5.1 2026-03-30 00:15:59.587371 | orchestrator | API version: 1.47 2026-03-30 00:15:59.587395 | orchestrator | Go version: go1.22.11 2026-03-30 00:15:59.587414 | orchestrator | Git commit: 9f9e405 2026-03-30 00:15:59.587432 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-30 00:15:59.587452 | orchestrator | OS/Arch: linux/amd64 2026-03-30 00:15:59.587471 | orchestrator | Context: default 2026-03-30 00:15:59.587490 | orchestrator | 2026-03-30 
00:15:59.587509 | orchestrator | Server: Docker Engine - Community 2026-03-30 00:15:59.587528 | orchestrator | Engine: 2026-03-30 00:15:59.587546 | orchestrator | Version: 27.5.1 2026-03-30 00:15:59.587566 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-30 00:15:59.587653 | orchestrator | Go version: go1.22.11 2026-03-30 00:15:59.587674 | orchestrator | Git commit: 4c9b3b0 2026-03-30 00:15:59.587687 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-30 00:15:59.587697 | orchestrator | OS/Arch: linux/amd64 2026-03-30 00:15:59.587708 | orchestrator | Experimental: false 2026-03-30 00:15:59.587720 | orchestrator | containerd: 2026-03-30 00:15:59.587731 | orchestrator | Version: v2.2.2 2026-03-30 00:15:59.587742 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-30 00:15:59.587755 | orchestrator | runc: 2026-03-30 00:15:59.587766 | orchestrator | Version: 1.3.4 2026-03-30 00:15:59.587777 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-30 00:15:59.587789 | orchestrator | docker-init: 2026-03-30 00:15:59.587800 | orchestrator | Version: 0.19.0 2026-03-30 00:15:59.587811 | orchestrator | GitCommit: de40ad0 2026-03-30 00:15:59.589755 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-30 00:15:59.598561 | orchestrator | + set -e 2026-03-30 00:15:59.598617 | orchestrator | + source /opt/manager-vars.sh 2026-03-30 00:15:59.598632 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-30 00:15:59.598645 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-30 00:15:59.598656 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-30 00:15:59.598668 | orchestrator | ++ CEPH_VERSION=reef 2026-03-30 00:15:59.598679 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-30 00:15:59.599583 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-30 00:15:59.599605 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 00:15:59.599624 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 
00:15:59.599644 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-30 00:15:59.599663 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-30 00:15:59.599681 | orchestrator | ++ export ARA=false 2026-03-30 00:15:59.599700 | orchestrator | ++ ARA=false 2026-03-30 00:15:59.599719 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-30 00:15:59.599739 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-30 00:15:59.599757 | orchestrator | ++ export TEMPEST=true 2026-03-30 00:15:59.599777 | orchestrator | ++ TEMPEST=true 2026-03-30 00:15:59.599791 | orchestrator | ++ export IS_ZUUL=true 2026-03-30 00:15:59.599802 | orchestrator | ++ IS_ZUUL=true 2026-03-30 00:15:59.599814 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 00:15:59.599825 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 00:15:59.599836 | orchestrator | ++ export EXTERNAL_API=false 2026-03-30 00:15:59.599847 | orchestrator | ++ EXTERNAL_API=false 2026-03-30 00:15:59.599857 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-30 00:15:59.599868 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-30 00:15:59.599879 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-30 00:15:59.599890 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-30 00:15:59.599901 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-30 00:15:59.599912 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-30 00:15:59.599923 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-30 00:15:59.599934 | orchestrator | ++ export INTERACTIVE=false 2026-03-30 00:15:59.599944 | orchestrator | ++ INTERACTIVE=false 2026-03-30 00:15:59.599955 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-30 00:15:59.599972 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-30 00:15:59.599983 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-30 00:15:59.599993 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-30 00:15:59.600004 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-03-30 00:15:59.606461 | orchestrator | + set -e 2026-03-30 00:15:59.606509 | orchestrator | + VERSION=reef 2026-03-30 00:15:59.607398 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-30 00:15:59.612992 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-30 00:15:59.613034 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-30 00:15:59.619983 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-30 00:15:59.637753 | orchestrator | + set -e 2026-03-30 00:15:59.637824 | orchestrator | + VERSION=2024.2 2026-03-30 00:15:59.638894 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-30 00:15:59.641781 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-30 00:15:59.641825 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-30 00:15:59.647149 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-30 00:15:59.648045 | orchestrator | ++ semver latest 7.0.0 2026-03-30 00:15:59.714289 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-30 00:15:59.714421 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-30 00:15:59.714439 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-30 00:15:59.714467 | orchestrator | ++ semver latest 10.0.0-0 2026-03-30 00:15:59.775289 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-30 00:15:59.775448 | orchestrator | ++ semver 2024.2 2025.1 2026-03-30 00:15:59.839399 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-30 00:15:59.839527 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-30 00:15:59.931334 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-30 00:15:59.932288 | orchestrator | + source /opt/venv/bin/activate 
2026-03-30 00:15:59.933260 | orchestrator | ++ deactivate nondestructive 2026-03-30 00:15:59.933290 | orchestrator | ++ '[' -n '' ']' 2026-03-30 00:15:59.933303 | orchestrator | ++ '[' -n '' ']' 2026-03-30 00:15:59.933318 | orchestrator | ++ hash -r 2026-03-30 00:15:59.933423 | orchestrator | ++ '[' -n '' ']' 2026-03-30 00:15:59.933439 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-30 00:15:59.933512 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-30 00:15:59.933530 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-30 00:15:59.933637 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-30 00:15:59.933653 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-30 00:15:59.933664 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-30 00:15:59.933675 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-30 00:15:59.933687 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-30 00:15:59.933704 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-30 00:15:59.933715 | orchestrator | ++ export PATH 2026-03-30 00:15:59.933859 | orchestrator | ++ '[' -n '' ']' 2026-03-30 00:15:59.933876 | orchestrator | ++ '[' -z '' ']' 2026-03-30 00:15:59.933887 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-30 00:15:59.933899 | orchestrator | ++ PS1='(venv) ' 2026-03-30 00:15:59.933910 | orchestrator | ++ export PS1 2026-03-30 00:15:59.933921 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-30 00:15:59.933937 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-30 00:15:59.933949 | orchestrator | ++ hash -r 2026-03-30 00:15:59.934438 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-30 00:16:01.099830 | orchestrator | 2026-03-30 00:16:01.099946 | 
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-30 00:16:01.099964 | orchestrator | 2026-03-30 00:16:01.099976 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-30 00:16:01.645117 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:01.645272 | orchestrator | 2026-03-30 00:16:01.645293 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-30 00:16:02.580850 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:02.580949 | orchestrator | 2026-03-30 00:16:02.580967 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-30 00:16:02.580980 | orchestrator | 2026-03-30 00:16:02.580991 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:16:04.799135 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:04.799233 | orchestrator | 2026-03-30 00:16:04.799242 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-30 00:16:04.851966 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:04.852094 | orchestrator | 2026-03-30 00:16:04.852125 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-30 00:16:05.259759 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:05.259875 | orchestrator | 2026-03-30 00:16:05.259894 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-30 00:16:05.289718 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:05.289805 | orchestrator | 2026-03-30 00:16:05.289819 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-30 00:16:05.599171 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:05.599284 | orchestrator | 2026-03-30 
00:16:05.599301 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-30 00:16:05.900683 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:05.900782 | orchestrator | 2026-03-30 00:16:05.900798 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-30 00:16:06.002436 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:06.002530 | orchestrator | 2026-03-30 00:16:06.002546 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-30 00:16:06.002638 | orchestrator | 2026-03-30 00:16:06.002650 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:16:07.611937 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:07.612048 | orchestrator | 2026-03-30 00:16:07.612067 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-30 00:16:07.707934 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-30 00:16:07.708009 | orchestrator | 2026-03-30 00:16:07.708017 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-30 00:16:07.759846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-30 00:16:07.759946 | orchestrator | 2026-03-30 00:16:07.759963 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-30 00:16:08.754925 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-30 00:16:08.755011 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-30 00:16:08.755022 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-30 00:16:08.755031 | orchestrator | 2026-03-30 00:16:08.755040 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-30 00:16:10.473382 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-30 00:16:10.473502 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-30 00:16:10.473518 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-30 00:16:10.474335 | orchestrator | 2026-03-30 00:16:10.474360 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-30 00:16:11.103678 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-30 00:16:11.103807 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:11.103825 | orchestrator | 2026-03-30 00:16:11.103839 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-30 00:16:11.720505 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-30 00:16:11.720600 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:11.720618 | orchestrator | 2026-03-30 00:16:11.720631 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-30 00:16:11.777274 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:11.777374 | orchestrator | 2026-03-30 00:16:11.777391 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-30 00:16:12.116409 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:12.116503 | orchestrator | 2026-03-30 00:16:12.116520 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-30 00:16:12.190777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-30 00:16:12.190872 | orchestrator | 2026-03-30 00:16:12.190887 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-03-30 00:16:13.252605 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:13.252705 | orchestrator | 2026-03-30 00:16:13.252721 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-30 00:16:14.007924 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:14.008021 | orchestrator | 2026-03-30 00:16:14.008042 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-30 00:16:29.028863 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:29.029001 | orchestrator | 2026-03-30 00:16:29.029041 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-30 00:16:29.077187 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:29.077295 | orchestrator | 2026-03-30 00:16:29.077310 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-30 00:16:29.077322 | orchestrator | 2026-03-30 00:16:29.077333 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:16:30.810512 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:30.810606 | orchestrator | 2026-03-30 00:16:30.810652 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-30 00:16:30.930415 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-30 00:16:30.930599 | orchestrator | 2026-03-30 00:16:30.930632 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-30 00:16:30.985917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-30 00:16:30.985979 | orchestrator | 2026-03-30 00:16:30.985987 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-03-30 00:16:33.306089 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:33.306176 | orchestrator | 2026-03-30 00:16:33.306186 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-30 00:16:33.360463 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:33.360548 | orchestrator | 2026-03-30 00:16:33.360561 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-30 00:16:33.487242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-30 00:16:33.487344 | orchestrator | 2026-03-30 00:16:33.487364 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-30 00:16:36.232462 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-30 00:16:36.852424 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-30 00:16:36.852544 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-30 00:16:36.852570 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-30 00:16:36.852590 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-30 00:16:36.852609 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-30 00:16:36.852628 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-30 00:16:36.852647 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-30 00:16:36.852666 | orchestrator | 2026-03-30 00:16:36.852685 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-30 00:16:36.882128 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:36.882296 | orchestrator | 2026-03-30 00:16:36.882325 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-03-30 00:16:37.491085 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:37.491180 | orchestrator | 2026-03-30 00:16:37.491239 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-30 00:16:37.573745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-30 00:16:37.573832 | orchestrator | 2026-03-30 00:16:37.573846 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-30 00:16:38.795727 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-30 00:16:38.795886 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-30 00:16:38.795908 | orchestrator | 2026-03-30 00:16:38.795922 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-30 00:16:39.405823 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:39.405950 | orchestrator | 2026-03-30 00:16:39.405980 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-30 00:16:39.464812 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:39.464920 | orchestrator | 2026-03-30 00:16:39.464937 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-30 00:16:39.536821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-30 00:16:39.536910 | orchestrator | 2026-03-30 00:16:39.536923 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-30 00:16:40.148878 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:40.148979 | orchestrator | 2026-03-30 00:16:40.148996 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-03-30 00:16:40.208777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-30 00:16:40.208922 | orchestrator | 2026-03-30 00:16:40.208941 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-30 00:16:41.527291 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-30 00:16:41.527440 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-30 00:16:41.527484 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:41.527498 | orchestrator | 2026-03-30 00:16:41.527509 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-30 00:16:42.133285 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:42.133387 | orchestrator | 2026-03-30 00:16:42.133407 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-30 00:16:42.186209 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:42.186301 | orchestrator | 2026-03-30 00:16:42.186319 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-30 00:16:42.275694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-30 00:16:42.275791 | orchestrator | 2026-03-30 00:16:42.275808 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-30 00:16:42.783314 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:42.783412 | orchestrator | 2026-03-30 00:16:42.783452 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-30 00:16:43.194623 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:43.194737 | orchestrator | 2026-03-30 00:16:43.194763 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-30 00:16:44.375961 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-30 00:16:44.376072 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-30 00:16:44.376089 | orchestrator | 2026-03-30 00:16:44.376109 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-30 00:16:45.023477 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:45.023600 | orchestrator | 2026-03-30 00:16:45.023628 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-30 00:16:45.392850 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:45.392949 | orchestrator | 2026-03-30 00:16:45.392965 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-30 00:16:45.740718 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:45.740825 | orchestrator | 2026-03-30 00:16:45.740841 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-30 00:16:45.778209 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:45.778312 | orchestrator | 2026-03-30 00:16:45.778330 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-30 00:16:45.848328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-30 00:16:45.848424 | orchestrator | 2026-03-30 00:16:45.848438 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-30 00:16:45.898150 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:45.898281 | orchestrator | 2026-03-30 00:16:45.898333 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-30 
00:16:47.870940 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-30 00:16:47.871072 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-30 00:16:47.871100 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-30 00:16:47.871124 | orchestrator | 2026-03-30 00:16:47.871147 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-30 00:16:48.576091 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:48.576270 | orchestrator | 2026-03-30 00:16:48.576299 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-30 00:16:49.264750 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:49.264845 | orchestrator | 2026-03-30 00:16:49.264861 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-30 00:16:49.962263 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:49.962370 | orchestrator | 2026-03-30 00:16:49.962390 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-30 00:16:50.036356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-30 00:16:50.036445 | orchestrator | 2026-03-30 00:16:50.036460 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-30 00:16:50.090076 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:50.090153 | orchestrator | 2026-03-30 00:16:50.090167 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-30 00:16:50.765863 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-30 00:16:50.765926 | orchestrator | 2026-03-30 00:16:50.765933 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-03-30 00:16:50.849652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-30 00:16:50.849721 | orchestrator | 2026-03-30 00:16:50.849731 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-30 00:16:51.547979 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:51.548132 | orchestrator | 2026-03-30 00:16:51.548162 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-30 00:16:52.179638 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:52.180795 | orchestrator | 2026-03-30 00:16:52.180837 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-30 00:16:52.221114 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:16:52.221171 | orchestrator | 2026-03-30 00:16:52.221190 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-30 00:16:52.271043 | orchestrator | ok: [testbed-manager] 2026-03-30 00:16:52.271124 | orchestrator | 2026-03-30 00:16:52.271138 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-30 00:16:53.052857 | orchestrator | changed: [testbed-manager] 2026-03-30 00:16:53.052986 | orchestrator | 2026-03-30 00:16:53.053003 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-30 00:18:01.220081 | orchestrator | changed: [testbed-manager] 2026-03-30 00:18:01.220261 | orchestrator | 2026-03-30 00:18:01.220287 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-30 00:18:02.142575 | orchestrator | ok: [testbed-manager] 2026-03-30 00:18:02.142669 | orchestrator | 2026-03-30 00:18:02.142687 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-03-30 00:18:02.200555 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:18:02.200633 | orchestrator | 2026-03-30 00:18:02.200648 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-30 00:18:04.949512 | orchestrator | changed: [testbed-manager] 2026-03-30 00:18:04.949590 | orchestrator | 2026-03-30 00:18:04.949599 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-30 00:18:05.060813 | orchestrator | ok: [testbed-manager] 2026-03-30 00:18:05.060913 | orchestrator | 2026-03-30 00:18:05.060953 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-30 00:18:05.060967 | orchestrator | 2026-03-30 00:18:05.060978 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-30 00:18:05.116638 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:18:05.116754 | orchestrator | 2026-03-30 00:18:05.116781 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-30 00:19:05.170125 | orchestrator | Pausing for 60 seconds 2026-03-30 00:19:05.170291 | orchestrator | changed: [testbed-manager] 2026-03-30 00:19:05.170323 | orchestrator | 2026-03-30 00:19:05.170345 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-30 00:19:08.811516 | orchestrator | changed: [testbed-manager] 2026-03-30 00:19:08.811606 | orchestrator | 2026-03-30 00:19:08.811618 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-30 00:19:50.315868 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-30 00:19:50.315951 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
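The `FAILED - RETRYING ... (50 retries left)` lines above come from Ansible's retries/until mechanism: the handler re-runs its health check on a fixed delay until it passes or the retry budget is exhausted. A shell sketch of those semantics, with a hypothetical `retry_until` helper (not part of the testbed) and a demo check that only passes on its third call:

```shell
#!/usr/bin/env bash
# retry_until RETRIES DELAY CMD... -- rerun CMD until it succeeds,
# emitting a "retries left" message on each failure, like Ansible does.
retry_until() {
    local retries="$1" delay="$2"; shift 2
    local attempt
    for ((attempt = 1; attempt <= retries; attempt++)); do
        if "$@"; then
            return 0
        fi
        echo "FAILED - RETRYING ($((retries - attempt)) retries left)" >&2
        sleep "$delay"
    done
    return 1
}

# Demo check: reports healthy only from the third attempt onward.
ATTEMPTS=0
flaky_check() { ATTEMPTS=$((ATTEMPTS + 1)); [ "$ATTEMPTS" -ge 3 ]; }

retry_until 5 0 flaky_check && STATUS=ok || STATUS=failed
```

In the log the check failed twice before the manager container reported healthy, matching the two retry messages followed by `changed`.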
2026-03-30 00:19:50.315959 | orchestrator | changed: [testbed-manager] 2026-03-30 00:19:50.315985 | orchestrator | 2026-03-30 00:19:50.315991 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-30 00:19:55.991233 | orchestrator | changed: [testbed-manager] 2026-03-30 00:19:55.991342 | orchestrator | 2026-03-30 00:19:55.991360 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-30 00:19:56.074467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-30 00:19:56.074557 | orchestrator | 2026-03-30 00:19:56.074573 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-30 00:19:56.074586 | orchestrator | 2026-03-30 00:19:56.074597 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-30 00:19:56.123783 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:19:56.123866 | orchestrator | 2026-03-30 00:19:56.123881 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-30 00:19:56.186292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-30 00:19:56.186426 | orchestrator | 2026-03-30 00:19:56.186456 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-30 00:19:56.936395 | orchestrator | changed: [testbed-manager] 2026-03-30 00:19:56.936495 | orchestrator | 2026-03-30 00:19:56.936514 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-30 00:19:59.971224 | orchestrator | ok: [testbed-manager] 2026-03-30 00:19:59.971343 | orchestrator | 2026-03-30 00:19:59.971369 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-30 00:20:00.044040 | orchestrator | ok: [testbed-manager] => { 2026-03-30 00:20:00.044232 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-30 00:20:00.044265 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-30 00:20:00.044287 | orchestrator | "Checking running containers against expected versions...", 2026-03-30 00:20:00.044310 | orchestrator | "", 2026-03-30 00:20:00.044333 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-30 00:20:00.044353 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-30 00:20:00.044372 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.044392 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-30 00:20:00.044411 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.044430 | orchestrator | "", 2026-03-30 00:20:00.044449 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-30 00:20:00.044468 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-30 00:20:00.044488 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.044507 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-30 00:20:00.044526 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.044546 | orchestrator | "", 2026-03-30 00:20:00.044565 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-30 00:20:00.044585 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-30 00:20:00.044604 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.044624 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-30 00:20:00.044644 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.044662 | orchestrator | "", 2026-03-30 00:20:00.044682 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-30 00:20:00.044701 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-30 00:20:00.044721 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.044742 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-30 00:20:00.044761 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.044779 | orchestrator | "", 2026-03-30 00:20:00.044800 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-30 00:20:00.044818 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-30 00:20:00.044890 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.044911 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-30 00:20:00.044931 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.044949 | orchestrator | "", 2026-03-30 00:20:00.044967 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-30 00:20:00.044987 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-30 00:20:00.045005 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.045024 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-30 00:20:00.045043 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.045061 | orchestrator | "", 2026-03-30 00:20:00.045099 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-30 00:20:00.045120 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-30 00:20:00.045139 | orchestrator | " Enabled: true", 2026-03-30 00:20:00.045157 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-30 00:20:00.045175 | orchestrator | " Status: ✅ MATCH", 2026-03-30 00:20:00.045193 | orchestrator | "", 2026-03-30 00:20:00.045213 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-30 00:20:00.045231 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-30 00:20:00.045249 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045267 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-30 00:20:00.045285 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045303 | orchestrator | "",
2026-03-30 00:20:00.045334 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-30 00:20:00.045354 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-30 00:20:00.045378 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045397 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-30 00:20:00.045416 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045434 | orchestrator | "",
2026-03-30 00:20:00.045453 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-30 00:20:00.045470 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-30 00:20:00.045488 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045506 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-30 00:20:00.045524 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045542 | orchestrator | "",
2026-03-30 00:20:00.045562 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-30 00:20:00.045580 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045598 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045616 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045635 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045653 | orchestrator | "",
2026-03-30 00:20:00.045672 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-30 00:20:00.045690 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045708 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045726 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045744 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045763 | orchestrator | "",
2026-03-30 00:20:00.045782 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-30 00:20:00.045800 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045818 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045837 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045855 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045874 | orchestrator | "",
2026-03-30 00:20:00.045893 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-30 00:20:00.045911 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045929 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.045947 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.045979 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.045999 | orchestrator | "",
2026-03-30 00:20:00.046110 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-30 00:20:00.046161 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.046183 | orchestrator | " Enabled: true",
2026-03-30 00:20:00.046202 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-30 00:20:00.046221 | orchestrator | " Status: ✅ MATCH",
2026-03-30 00:20:00.046239 | orchestrator | "",
2026-03-30 00:20:00.046257 | orchestrator | "=== Summary ===",
2026-03-30 00:20:00.046276 | orchestrator | "Errors (version mismatches): 0",
2026-03-30 00:20:00.046296 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-30 00:20:00.046315 | orchestrator | "",
2026-03-30 00:20:00.046333 | orchestrator | "✅ All running containers match expected versions!"
2026-03-30 00:20:00.046351 | orchestrator | ]
2026-03-30 00:20:00.046370 | orchestrator | }
2026-03-30 00:20:00.046390 | orchestrator |
2026-03-30 00:20:00.046410 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-30 00:20:00.102664 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:20:00.102740 | orchestrator |
2026-03-30 00:20:00.102750 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:20:00.102758 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-30 00:20:00.102765 | orchestrator |
2026-03-30 00:20:00.204970 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-30 00:20:00.205058 | orchestrator | + deactivate
2026-03-30 00:20:00.205109 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-30 00:20:00.205135 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-30 00:20:00.205154 | orchestrator | + export PATH
2026-03-30 00:20:00.205172 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-30 00:20:00.205191 | orchestrator | + '[' -n '' ']'
2026-03-30 00:20:00.205211 | orchestrator | + hash -r
2026-03-30 00:20:00.205229 | orchestrator | + '[' -n '' ']'
2026-03-30 00:20:00.205247 | orchestrator | + unset VIRTUAL_ENV
2026-03-30 00:20:00.205263 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-30 00:20:00.205275 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-30 00:20:00.205286 | orchestrator | + unset -f deactivate
2026-03-30 00:20:00.205380 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-30 00:20:00.214073 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-30 00:20:00.214161 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-30 00:20:00.214174 | orchestrator | + local max_attempts=60
2026-03-30 00:20:00.214187 | orchestrator | + local name=ceph-ansible
2026-03-30 00:20:00.214198 | orchestrator | + local attempt_num=1
2026-03-30 00:20:00.214608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-30 00:20:00.246305 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-30 00:20:00.246392 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-30 00:20:00.246408 | orchestrator | + local max_attempts=60
2026-03-30 00:20:00.246420 | orchestrator | + local name=kolla-ansible
2026-03-30 00:20:00.246431 | orchestrator | + local attempt_num=1
2026-03-30 00:20:00.246454 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-30 00:20:00.275408 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-30 00:20:00.275492 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-30 00:20:00.275507 | orchestrator | + local max_attempts=60
2026-03-30 00:20:00.275518 | orchestrator | + local name=osism-ansible
2026-03-30 00:20:00.275528 | orchestrator | + local attempt_num=1
2026-03-30 00:20:00.276234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-30 00:20:00.310774 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-30 00:20:00.310876 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-30 00:20:00.310891 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-30 00:20:01.000554 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-30 00:20:01.180997 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-30 00:20:01.181184 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181206 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181218 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-30 00:20:01.181231 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-03-30 00:20:01.181243 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181254 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181265 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2026-03-30 00:20:01.181293 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181304 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-03-30 00:20:01.181316 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181326 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-03-30 00:20:01.181337 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181348 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-30 00:20:01.181359 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.181370 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-03-30 00:20:01.187379 | orchestrator | ++ semver latest 7.0.0
2026-03-30 00:20:01.232764 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-30 00:20:01.232854 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-30 00:20:01.232871 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-30 00:20:01.236560 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-30 00:20:13.682265 | orchestrator | 2026-03-30 00:20:13 | INFO  | Prepare task for execution of resolvconf.
2026-03-30 00:20:13.881522 | orchestrator | 2026-03-30 00:20:13 | INFO  | Task 236be53b-5ca9-4580-ad83-21e08a25250a (resolvconf) was prepared for execution.
2026-03-30 00:20:13.881655 | orchestrator | 2026-03-30 00:20:13 | INFO  | It takes a moment until task 236be53b-5ca9-4580-ad83-21e08a25250a (resolvconf) has been started and output is visible here.
2026-03-30 00:20:26.450768 | orchestrator |
2026-03-30 00:20:26.450855 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-30 00:20:26.450865 | orchestrator |
2026-03-30 00:20:26.450872 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-30 00:20:26.450879 | orchestrator | Monday 30 March 2026 00:20:17 +0000 (0:00:00.160) 0:00:00.160 **********
2026-03-30 00:20:26.450886 | orchestrator | ok: [testbed-manager]
2026-03-30 00:20:26.450894 | orchestrator |
2026-03-30 00:20:26.450900 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-30 00:20:26.450907 | orchestrator | Monday 30 March 2026 00:20:20 +0000 (0:00:03.480) 0:00:03.640 **********
2026-03-30 00:20:26.450914 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:20:26.450921 | orchestrator |
2026-03-30 00:20:26.450927 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-30 00:20:26.450933 | orchestrator | Monday 30 March 2026 00:20:20 +0000 (0:00:00.072) 0:00:03.712 **********
2026-03-30 00:20:26.450940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-30 00:20:26.450947 | orchestrator |
2026-03-30 00:20:26.450955 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-30 00:20:26.450966 | orchestrator | Monday 30 March 2026 00:20:20 +0000 (0:00:00.085) 0:00:03.798 **********
2026-03-30 00:20:26.450985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-30 00:20:26.450997 | orchestrator |
2026-03-30 00:20:26.451007 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-30 00:20:26.451017 | orchestrator | Monday 30 March 2026 00:20:20 +0000 (0:00:00.070) 0:00:03.868 **********
2026-03-30 00:20:26.451026 | orchestrator | ok: [testbed-manager]
2026-03-30 00:20:26.451036 | orchestrator |
2026-03-30 00:20:26.451045 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-30 00:20:26.451056 | orchestrator | Monday 30 March 2026 00:20:21 +0000 (0:00:01.109) 0:00:04.978 **********
2026-03-30 00:20:26.451065 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:20:26.451098 | orchestrator |
2026-03-30 00:20:26.451109 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-30 00:20:26.451120 | orchestrator | Monday 30 March 2026 00:20:21 +0000 (0:00:00.067) 0:00:05.046 **********
2026-03-30 00:20:26.451130 | orchestrator | ok: [testbed-manager]
2026-03-30 00:20:26.451140 | orchestrator |
2026-03-30 00:20:26.451150 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-30 00:20:26.451160 | orchestrator | Monday 30 March 2026 00:20:22 +0000 (0:00:00.561) 0:00:05.607 **********
2026-03-30 00:20:26.451170 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:20:26.451180 | orchestrator |
2026-03-30 00:20:26.451191 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-30 00:20:26.451202 | orchestrator | Monday 30 March 2026 00:20:22 +0000 (0:00:00.081) 0:00:05.689 **********
2026-03-30 00:20:26.451208 | orchestrator | changed: [testbed-manager]
2026-03-30 00:20:26.451215 | orchestrator |
2026-03-30 00:20:26.451221 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-30 00:20:26.451227 | orchestrator | Monday 30 March 2026 00:20:23 +0000 (0:00:00.634) 0:00:06.323 **********
2026-03-30 00:20:26.451233 | orchestrator | changed: [testbed-manager]
2026-03-30 00:20:26.451240 | orchestrator |
2026-03-30 00:20:26.451246 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-30 00:20:26.451252 | orchestrator | Monday 30 March 2026 00:20:24 +0000 (0:00:01.153) 0:00:07.477 **********
2026-03-30 00:20:26.451258 | orchestrator | ok: [testbed-manager]
2026-03-30 00:20:26.451264 | orchestrator |
2026-03-30 00:20:26.451289 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-30 00:20:26.451296 | orchestrator | Monday 30 March 2026 00:20:25 +0000 (0:00:00.913) 0:00:08.390 **********
2026-03-30 00:20:26.451302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-30 00:20:26.451308 | orchestrator |
2026-03-30 00:20:26.451314 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-30 00:20:26.451320 | orchestrator | Monday 30 March 2026 00:20:25 +0000 (0:00:00.065) 0:00:08.456 **********
2026-03-30 00:20:26.451326 | orchestrator | changed: [testbed-manager]
2026-03-30 00:20:26.451332 | orchestrator |
2026-03-30 00:20:26.451340 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:20:26.451349 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-30 00:20:26.451356 | orchestrator |
2026-03-30 00:20:26.451363 | orchestrator |
2026-03-30 00:20:26.451370 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:20:26.451378 | orchestrator | Monday 30 March 2026 00:20:26 +0000 (0:00:01.001) 0:00:09.457 **********
2026-03-30 00:20:26.451385 | orchestrator | ===============================================================================
2026-03-30 00:20:26.451392 | orchestrator | Gathering Facts --------------------------------------------------------- 3.48s
2026-03-30 00:20:26.451399 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.15s
2026-03-30 00:20:26.451406 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s
2026-03-30 00:20:26.451412 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.00s
2026-03-30 00:20:26.451419 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.91s
2026-03-30 00:20:26.451426 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s
2026-03-30 00:20:26.451447 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s
2026-03-30 00:20:26.451455 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-03-30 00:20:26.451462 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-30 00:20:26.451469 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-03-30 00:20:26.451476 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-30 00:20:26.451484 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-03-30 00:20:26.451491 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2026-03-30 00:20:26.563118 | orchestrator | + osism apply sshconfig
2026-03-30 00:20:37.763714 | orchestrator | 2026-03-30 00:20:37 | INFO  | Prepare task for execution of sshconfig.
2026-03-30 00:20:37.838332 | orchestrator | 2026-03-30 00:20:37 | INFO  | Task b78b4b80-ce0d-4657-933d-aa1bd173cffd (sshconfig) was prepared for execution.
2026-03-30 00:20:37.838437 | orchestrator | 2026-03-30 00:20:37 | INFO  | It takes a moment until task b78b4b80-ce0d-4657-933d-aa1bd173cffd (sshconfig) has been started and output is visible here.
2026-03-30 00:20:48.218620 | orchestrator |
2026-03-30 00:20:48.218715 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-30 00:20:48.218732 | orchestrator |
2026-03-30 00:20:48.218745 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-30 00:20:48.218757 | orchestrator | Monday 30 March 2026 00:20:40 +0000 (0:00:00.190) 0:00:00.190 **********
2026-03-30 00:20:48.218768 | orchestrator | ok: [testbed-manager]
2026-03-30 00:20:48.218780 | orchestrator |
2026-03-30 00:20:48.218791 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-30 00:20:48.218802 | orchestrator | Monday 30 March 2026 00:20:41 +0000 (0:00:00.971) 0:00:01.162 **********
2026-03-30 00:20:48.218836 | orchestrator | changed: [testbed-manager]
2026-03-30 00:20:48.218848 | orchestrator |
2026-03-30 00:20:48.218859 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-30 00:20:48.218870 | orchestrator | Monday 30 March 2026 00:20:42 +0000 (0:00:00.493) 0:00:01.655 **********
2026-03-30 00:20:48.218881 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-30 00:20:48.218892 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-30 00:20:48.218903 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-30 00:20:48.218913 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-30 00:20:48.218924 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-30 00:20:48.218934 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-30 00:20:48.218945 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-30 00:20:48.218955 | orchestrator |
2026-03-30 00:20:48.218966 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-30 00:20:48.218977 | orchestrator | Monday 30 March 2026 00:20:47 +0000 (0:00:05.100) 0:00:06.755 **********
2026-03-30 00:20:48.218987 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:20:48.218998 | orchestrator |
2026-03-30 00:20:48.219008 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-30 00:20:48.219019 | orchestrator | Monday 30 March 2026 00:20:47 +0000 (0:00:00.081) 0:00:06.837 **********
2026-03-30 00:20:48.219030 | orchestrator | changed: [testbed-manager]
2026-03-30 00:20:48.219040 | orchestrator |
2026-03-30 00:20:48.219052 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:20:48.219064 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-30 00:20:48.219103 | orchestrator |
2026-03-30 00:20:48.219117 | orchestrator |
2026-03-30 00:20:48.219135 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:20:48.219153 | orchestrator | Monday 30 March 2026 00:20:48 +0000 (0:00:00.463) 0:00:07.301 **********
2026-03-30 00:20:48.219171 | orchestrator | ===============================================================================
2026-03-30 00:20:48.219190 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.10s
2026-03-30 00:20:48.219210 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.97s
2026-03-30 00:20:48.219228 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s
2026-03-30 00:20:48.219247 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.46s
2026-03-30 00:20:48.219262 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-03-30 00:20:48.329453 | orchestrator | + osism apply known-hosts
2026-03-30 00:20:59.487153 | orchestrator | 2026-03-30 00:20:59 | INFO  | Prepare task for execution of known-hosts.
2026-03-30 00:20:59.562983 | orchestrator | 2026-03-30 00:20:59 | INFO  | Task a8c216b1-78c6-4766-bc0b-68cad73b29ab (known-hosts) was prepared for execution.
2026-03-30 00:20:59.563043 | orchestrator | 2026-03-30 00:20:59 | INFO  | It takes a moment until task a8c216b1-78c6-4766-bc0b-68cad73b29ab (known-hosts) has been started and output is visible here.
2026-03-30 00:21:14.992824 | orchestrator |
2026-03-30 00:21:14.992930 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-30 00:21:14.992947 | orchestrator |
2026-03-30 00:21:14.992959 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-30 00:21:14.992971 | orchestrator | Monday 30 March 2026 00:21:02 +0000 (0:00:00.189) 0:00:00.189 **********
2026-03-30 00:21:14.992983 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-30 00:21:14.992994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-30 00:21:14.993006 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-30 00:21:14.993040 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-30 00:21:14.993052 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-30 00:21:14.993091 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-30 00:21:14.993103 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-30 00:21:14.993114 | orchestrator |
2026-03-30 00:21:14.993130 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-30
00:21:14.993150 | orchestrator | Monday 30 March 2026 00:21:09 +0000 (0:00:06.498) 0:00:06.687 ********** 2026-03-30 00:21:14.993189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-30 00:21:14.993219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-30 00:21:14.993238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-30 00:21:14.993257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-30 00:21:14.993308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-30 00:21:14.993328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-30 00:21:14.993347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-30 00:21:14.993367 | orchestrator | 2026-03-30 00:21:14.993387 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:14.993408 | orchestrator | Monday 30 March 2026 00:21:09 +0000 (0:00:00.159) 0:00:06.846 ********** 2026-03-30 00:21:14.993427 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOx4qU0afg9CWogCXzyc3inFFl97d4JWcfAgY/tQfvNb) 2026-03-30 00:21:14.993446 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCghPBND+DLkj7VCSD9ABgwn+QH54K5mIp5p+RJlXbKLwNIhAUy6BtB5/Il7V/N9MK5Y2CE7gyrUT7Nog+A7PgHhlnHQnLr6PYMGAX6UH2XG5MRJaDkldc0S8jQcDJrQGsK8qWecxR6+FHJFdAXNB+LXlQMHArAL9R39bIFulq+8snNjYuOQMvUj9GQkg2zGcxJnlPFdLNeY9h4wDhsq2BSljsBkwAtkypTe+wGqwoKzgexasO/p1uEkQQ1CDaOLC0lGdOuiHwMMThaoMcRzHb5Hz87ml9S+lvf01hfsL+7BURowhxIyGZyZJX1uq8t/Mc2v6tkyk+SVJq2iNxtXxQQkKcI7FZ+vx5NatsGR2mMsSdUvXMFQqX543Hn5nFj7GjTA7XNdQT9+sKXgZjJNI1u1AR/TCe8FS1yjf5FBKWj0z3iCnZMdF7OtGSZQvYLtWuXsnWlTBG4e/qCOawPk8hl1l3CwkLfNqF3JjgMxemOxW/ujSA0WNN9NoIvXvZNFhU=) 2026-03-30 00:21:14.993463 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMvGNl+XH0Bai0IaAZbLIXqumG3LPeZv+KBy7KxG1Rti1eEUeAy9iaJE+b6qvNwRlmLO/xA/We7GCXDR7svLbxc=) 2026-03-30 00:21:14.993477 | orchestrator | 2026-03-30 00:21:14.993489 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:14.993502 | orchestrator | Monday 30 March 2026 00:21:10 +0000 (0:00:01.220) 0:00:08.067 ********** 2026-03-30 00:21:14.993543 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3d2M5SxPd9MJMu7zqxE4ZxKix/bMkqzK/CYgzfOhyp4+zWsXtCR2F9dEwNAkf+szVzrcJXy1g3jv6QWwW7PAqG+TqoMk/TPE04i7WllEj0Ia3ECAmvd/upY9vq35hNzY4MPIq4dX4Wu6pbBduIinkFm8hId5m1UXRZ0BCoN0sm9DlncrGIAQWDVHT5KXIqzLv3ft2PVf7tgDEg2BEvsdbvoVbFiTRarUfzfh+Jx6+/25+2lTN6e01sFUT0KzadjXtUQ657JJD2lwApHPhNgeONMZEoIOaJ2LK8ZnJDIAJleQY22M2vs+ll+JApVK/2FpxYi5BuFZ3XjoT/3iIuYVsvTjMaoh8W2+YRYRpBVL0T3QN2WgfJkXZg33y7iYkBoqPo3B+sv6eWAL9UR94HpcpdhaxwWXh/Z7t9ZjB4vOJoP96ZdvqPmlGjg4F8iwgc8YaJ5t4ibwfitgMiUVoFwL3Uaxr2jHC1+Go17ET5MBIfTAuaTRCyLWUSjTYjPoRB8M=) 
2026-03-30 00:21:14.993569 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB39tpmlhwZ00rLmFQ8tLKR/Lel6pZUevmYzGkTb3xb131COKnQ1quqMlFzWqBmE7NMEXlW4iz8fohCHAg7TQV4=) 2026-03-30 00:21:14.993582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFutLCr0+csU56ATWZ+BP7AVCCE3HUX9t+cKu+71wBfy) 2026-03-30 00:21:14.993595 | orchestrator | 2026-03-30 00:21:14.993607 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:14.993620 | orchestrator | Monday 30 March 2026 00:21:11 +0000 (0:00:01.012) 0:00:09.079 ********** 2026-03-30 00:21:14.993633 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASWvQFwChNkrREx/LqemtQ2e8Rql+0XCCn0AYtDdPODRmFolxN/hlKWbAciGkzBQhkFE6WJ9OxGb3d9n1zgSEY=) 2026-03-30 00:21:14.993647 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB7ZmAFPktXPn2eGr/Qn0o9/PA5O9rE941soUTX6NTV4mMPS/JpE4yyF76qCWiwVKYY33gX4WvJYoMjZ+szUV6XF+yDx8HitorwDqbZdbqW/GRfP4/Z73WSL49U+7U+xb4mBSwlR0qa+MLQNfINFTSsHPBYmjCW3pmMt7Y6gp/GtJzDNtyS4Bp75hrdB+bQ4uynBO5SAjMOSm8U7XwMGqXiORoqTMOnbNmb6Mwf5v3kJO2IRl6kJYY0nIxkwJ6SD7Gp5Tg/5pINZQ1jVdLUVbAu0VVOazw3RIWoNl0HsRGd7o+565hxOwwSdo2G0+1AIoLiJEREJ3z+21g67MAcnQbDESUeysCvjJhjdnUuRR4/xCGqBz8RKsXW0k6stmP9FazbxGxVLBBUTizjm1mu12zK9LFocF9v/Ghw+YB4nHRfleQE/q7ssx/gf4kJXfJBwFiWTLzg3lP47tK4Kr9jZuXEMO/hIyd32tHRv90SqlyMHdaACwge9QTFEG8HSbGQR8=) 2026-03-30 00:21:14.993744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZsYuGkUkYeLkILU6Zchlp1jodcth+SbKFmeRSmcl7/) 2026-03-30 00:21:14.993765 | orchestrator | 2026-03-30 00:21:14.993784 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:14.993800 
| orchestrator | Monday 30 March 2026 00:21:12 +0000 (0:00:01.016) 0:00:10.096 ********** 2026-03-30 00:21:14.993817 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqoMyiJDFci3nZhMyYjCQMhOEWC3qU9UMyEFHB6SRAFPvZ8EicUlNX91qd5R47mb5/r9WDVvyyTAF+HdcNc8HQ=) 2026-03-30 00:21:14.993836 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM9MrbaienFtyEtsinANne78kXwRYQPNGT5fMPx2r4v9) 2026-03-30 00:21:14.993856 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWRt+BlWN3zmG0EL9iuI7RO2MOD3aiTUmyXCYfvJ7jKFdnD2LjJJdd8sO7kAY/Ck1uxgEdw1fbeXXEJC1E4fDEkLxtjYLZdt9Wo2XYc5fB3bNTt0kZOKw1KqCPSBrf2YrNr4LBTIdXTlntkZEmQaoKkrSwmc2XyZhIvnuOog//iBPZiLvFrVlY53cZwj5/woX6FZYrCglkccj6Bnb0XEpJDSquRFkOv+IfbKiJQ5EflR2d89J4gKhBtH/4P+H3JD+RJzaTu9lkSRxGtOM7C0WScaVTlJndYx8ShxoOZvrITY8R/upKe2lcGKGjozi0rzQqbYPT/6hLMo6aAzJKZ9P4Q1PIte+uiS1QfnwJf/rbmZA9univRb+6L3Fbh27Kr+lx+nqqk8WiCpUpBGKag0b3p582J54gzigpeFPyk4JZ9lAkc5T8bqgv2hL3EQnHr1TMBlx2UWeMAoL+lrxo9jgg/iK30JItxuTb/nFFngmzyUly17zWWWC82vw/XOGNauk=) 2026-03-30 00:21:14.993875 | orchestrator | 2026-03-30 00:21:14.993893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:14.993914 | orchestrator | Monday 30 March 2026 00:21:13 +0000 (0:00:01.035) 0:00:11.132 ********** 2026-03-30 00:21:14.993934 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6m3mOI65tiDPfOtJaIBbjSuGUhr0SxXn3BC5qL44QAtzThvTdjQuWqlj5cV8OwrW7OZr8nRliBNt5wu8gmP5kwj+RRLkh9Uaijuw/xVqBTNamxg/98jyBRYzbwFiTC3AIROO+IXymlGzKDTGj79svzNq/8n2sbW/dhRzqIyOFQvELm/6OZhxU5j7mlh5YHd5DEry2quJJXlw4o8q79/oeTY6SmL8RlAtmXInMvh/J+FBKDqp7FJ2PKNgGvwyplO0g9OjbZ5Sb33QT5c6VfpVSkdeiBRaYoAzTRpMBG2s40oFhYH4ZRotXULfBDcX/xTJ/NY0GM/HOOuTNIeyh50dzOKK9JkknwCFz7/cJW6XriZ6zEtuZ1i2pKHC6OClOEPOg9sKqOCtGuGRI64pFHV8jiTdAmHkk5escOQEEj1bP4LiXPBY/5WD9h7YSzEtYnGPzyXw8+yBDuIJ17cu0tWPyqoIFwqRYd7UdP1slLcOopwIp80BoZecyrZpODKSAP4k=) 2026-03-30 00:21:14.993965 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASXkd6QtB4Dd7Gvh7T3JQSAXiP8Lam9ezZFmNgv6uA2sjeWZtgnjsE3nS/DT2X4JqIJDBQCWlctdMjT489uvYE=) 2026-03-30 00:21:14.993977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINJzaESuRGUleZ8rrs9vYo6l9ePzk1+Sk25gDVrqfCI8) 2026-03-30 00:21:14.993988 | orchestrator | 2026-03-30 00:21:14.993999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:14.994010 | orchestrator | Monday 30 March 2026 00:21:14 +0000 (0:00:01.029) 0:00:12.161 ********** 2026-03-30 00:21:14.994125 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwctVSU5ey+ZBN/Kqp18AfKbrgtspF6ENDp+sv0kgQmMF99jA59RklbG0laIjqu4VpBHYSbbwfGMruqYAiW/84fNvRulHvDT8RWI35Zri3wgxt0cOYQesNpe5xwemcvCFpB6qDjSYhgyf8ygNwsOMb5kYZZK+uNHdoU2vJR2MkDOyxF1Ugl8Jv7c64lFckneSL1jDsvzglWMy0BlxxxNif8pKwILVt2UUlnTzdWiB8Z/PqIJCj9vNV3HIRuhbhIqCworp8ieTb6daffDHjEDdsuALl+hTk2uxNzOUmsGbG0aL75lEADVf8YrbzRrBzBrscZHJ7vBHjIdx4ATtyIFjIuRzZ6OD2wFRpzzVp4j4G+IbDVG/YAU3ildPbjhMbwojZx52lSZiZ1p2FHMKM7xTdQnXT6UtN2x93DhRnftuK9j03L2n991J7/jffXbBBk49+3KWHOlhBCBVKXM1WrEBp4r1P6ztUUkn2cXfNSTWlxilbU8KbRPxAzSfJxipvaGs=) 2026-03-30 00:21:25.731752 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMIpj4zipPuNldmoTdLxUxnu1RzXdA63bFRhYhwm/XSo0GqCKDlPa9qCwOatAb7JUaDnitscV08XjnsK80sYE7M=) 2026-03-30 00:21:25.731861 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvKTzbA6aEXHPg9W6MfD+lJtaDfFnkcrEHKwWGXoFlx) 2026-03-30 00:21:25.731879 | orchestrator | 2026-03-30 00:21:25.731892 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:25.731905 | orchestrator | Monday 30 March 2026 00:21:15 +0000 (0:00:01.027) 0:00:13.188 ********** 2026-03-30 00:21:25.731917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOeKeODDvtoc3AHtk57RFSZGBPcKUCP0uUd0FjCRJbwbIcinc+lsOcrZpruEhwn2e6Zxbvgufo/4oUXcFyB3HHQ=) 2026-03-30 00:21:25.731929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFcEOOCP2PXYx7JJh+fF6pkRYHOQPHjy0tfrBeNkQ/st) 2026-03-30 00:21:25.731942 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1Ujuh4/Iv50mrgGlrTMlTYBrlZY4OWM9JiVAK8pIc6xaZY7alD8u9xMEy+OSi14ArRhzqlvXSpb1b2KCORH/SkbykMAqRxApaYozp1Fn5r2rdK9AZTn4kYU5JHWd1FsuGZg0UVHMTA3wAoNyfhKFNDMYpl2JdmCRlLSePBAbSoEfn92JWMbS9LgoaJ98PUzP4O1IcqMJimiVtQ4YfjuK/L3WBp7+AFEu5ywklfi4VZZ01cGnxEbJBaxvAf5BDbGdXTaTQp5ATqKSx+x7Z6HQvPbXJioephNYfDrf3KLbKocjN/9w1QUccB44YiBOOBUseaFI74eSCbQ1gWmJzSn5pqG3fiE5V1Br6yCIyxzqInGg8nS4GSii7EszT4lthViwUayaDE7P2baYRNIWGXzxfOXeSE/tMoPWc9sPAY9awVlfvbL2tGX4acDW/gE227co6fiW0irjFOnTgQB2gZhXbAvHYyxaCkiCGc/BZGBFEl0Jx3EP1XWLnKM+sUYnibr0=) 2026-03-30 00:21:25.731956 | orchestrator | 2026-03-30 00:21:25.731968 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-30 00:21:25.731980 | orchestrator | Monday 30 March 2026 00:21:16 +0000 (0:00:01.040) 
0:00:14.229 ********** 2026-03-30 00:21:25.731992 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-30 00:21:25.732004 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-30 00:21:25.732015 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-30 00:21:25.732026 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-30 00:21:25.732038 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-30 00:21:25.732091 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-30 00:21:25.732111 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-30 00:21:25.732148 | orchestrator | 2026-03-30 00:21:25.732166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-30 00:21:25.732186 | orchestrator | Monday 30 March 2026 00:21:21 +0000 (0:00:05.156) 0:00:19.385 ********** 2026-03-30 00:21:25.732205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-30 00:21:25.732224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-30 00:21:25.732241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-30 00:21:25.732259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-30 00:21:25.732277 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-30 00:21:25.732295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-30 00:21:25.732313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-30 00:21:25.732333 | orchestrator | 2026-03-30 00:21:25.732352 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:25.732372 | orchestrator | Monday 30 March 2026 00:21:21 +0000 (0:00:00.154) 0:00:19.539 ********** 2026-03-30 00:21:25.732391 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMvGNl+XH0Bai0IaAZbLIXqumG3LPeZv+KBy7KxG1Rti1eEUeAy9iaJE+b6qvNwRlmLO/xA/We7GCXDR7svLbxc=) 2026-03-30 00:21:25.732440 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCghPBND+DLkj7VCSD9ABgwn+QH54K5mIp5p+RJlXbKLwNIhAUy6BtB5/Il7V/N9MK5Y2CE7gyrUT7Nog+A7PgHhlnHQnLr6PYMGAX6UH2XG5MRJaDkldc0S8jQcDJrQGsK8qWecxR6+FHJFdAXNB+LXlQMHArAL9R39bIFulq+8snNjYuOQMvUj9GQkg2zGcxJnlPFdLNeY9h4wDhsq2BSljsBkwAtkypTe+wGqwoKzgexasO/p1uEkQQ1CDaOLC0lGdOuiHwMMThaoMcRzHb5Hz87ml9S+lvf01hfsL+7BURowhxIyGZyZJX1uq8t/Mc2v6tkyk+SVJq2iNxtXxQQkKcI7FZ+vx5NatsGR2mMsSdUvXMFQqX543Hn5nFj7GjTA7XNdQT9+sKXgZjJNI1u1AR/TCe8FS1yjf5FBKWj0z3iCnZMdF7OtGSZQvYLtWuXsnWlTBG4e/qCOawPk8hl1l3CwkLfNqF3JjgMxemOxW/ujSA0WNN9NoIvXvZNFhU=) 2026-03-30 00:21:25.732463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOx4qU0afg9CWogCXzyc3inFFl97d4JWcfAgY/tQfvNb) 2026-03-30 
00:21:25.732482 | orchestrator | 2026-03-30 00:21:25.732500 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:25.732519 | orchestrator | Monday 30 March 2026 00:21:22 +0000 (0:00:00.938) 0:00:20.478 ********** 2026-03-30 00:21:25.732539 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB39tpmlhwZ00rLmFQ8tLKR/Lel6pZUevmYzGkTb3xb131COKnQ1quqMlFzWqBmE7NMEXlW4iz8fohCHAg7TQV4=) 2026-03-30 00:21:25.732558 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFutLCr0+csU56ATWZ+BP7AVCCE3HUX9t+cKu+71wBfy) 2026-03-30 00:21:25.732577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3d2M5SxPd9MJMu7zqxE4ZxKix/bMkqzK/CYgzfOhyp4+zWsXtCR2F9dEwNAkf+szVzrcJXy1g3jv6QWwW7PAqG+TqoMk/TPE04i7WllEj0Ia3ECAmvd/upY9vq35hNzY4MPIq4dX4Wu6pbBduIinkFm8hId5m1UXRZ0BCoN0sm9DlncrGIAQWDVHT5KXIqzLv3ft2PVf7tgDEg2BEvsdbvoVbFiTRarUfzfh+Jx6+/25+2lTN6e01sFUT0KzadjXtUQ657JJD2lwApHPhNgeONMZEoIOaJ2LK8ZnJDIAJleQY22M2vs+ll+JApVK/2FpxYi5BuFZ3XjoT/3iIuYVsvTjMaoh8W2+YRYRpBVL0T3QN2WgfJkXZg33y7iYkBoqPo3B+sv6eWAL9UR94HpcpdhaxwWXh/Z7t9ZjB4vOJoP96ZdvqPmlGjg4F8iwgc8YaJ5t4ibwfitgMiUVoFwL3Uaxr2jHC1+Go17ET5MBIfTAuaTRCyLWUSjTYjPoRB8M=) 2026-03-30 00:21:25.732602 | orchestrator | 2026-03-30 00:21:25.732613 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:25.732625 | orchestrator | Monday 30 March 2026 00:21:23 +0000 (0:00:00.947) 0:00:21.425 ********** 2026-03-30 00:21:25.732636 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASWvQFwChNkrREx/LqemtQ2e8Rql+0XCCn0AYtDdPODRmFolxN/hlKWbAciGkzBQhkFE6WJ9OxGb3d9n1zgSEY=) 2026-03-30 00:21:25.732648 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDB7ZmAFPktXPn2eGr/Qn0o9/PA5O9rE941soUTX6NTV4mMPS/JpE4yyF76qCWiwVKYY33gX4WvJYoMjZ+szUV6XF+yDx8HitorwDqbZdbqW/GRfP4/Z73WSL49U+7U+xb4mBSwlR0qa+MLQNfINFTSsHPBYmjCW3pmMt7Y6gp/GtJzDNtyS4Bp75hrdB+bQ4uynBO5SAjMOSm8U7XwMGqXiORoqTMOnbNmb6Mwf5v3kJO2IRl6kJYY0nIxkwJ6SD7Gp5Tg/5pINZQ1jVdLUVbAu0VVOazw3RIWoNl0HsRGd7o+565hxOwwSdo2G0+1AIoLiJEREJ3z+21g67MAcnQbDESUeysCvjJhjdnUuRR4/xCGqBz8RKsXW0k6stmP9FazbxGxVLBBUTizjm1mu12zK9LFocF9v/Ghw+YB4nHRfleQE/q7ssx/gf4kJXfJBwFiWTLzg3lP47tK4Kr9jZuXEMO/hIyd32tHRv90SqlyMHdaACwge9QTFEG8HSbGQR8=) 2026-03-30 00:21:25.732661 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZsYuGkUkYeLkILU6Zchlp1jodcth+SbKFmeRSmcl7/) 2026-03-30 00:21:25.732671 | orchestrator | 2026-03-30 00:21:25.732682 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:25.732693 | orchestrator | Monday 30 March 2026 00:21:24 +0000 (0:00:00.942) 0:00:22.367 ********** 2026-03-30 00:21:25.732714 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWRt+BlWN3zmG0EL9iuI7RO2MOD3aiTUmyXCYfvJ7jKFdnD2LjJJdd8sO7kAY/Ck1uxgEdw1fbeXXEJC1E4fDEkLxtjYLZdt9Wo2XYc5fB3bNTt0kZOKw1KqCPSBrf2YrNr4LBTIdXTlntkZEmQaoKkrSwmc2XyZhIvnuOog//iBPZiLvFrVlY53cZwj5/woX6FZYrCglkccj6Bnb0XEpJDSquRFkOv+IfbKiJQ5EflR2d89J4gKhBtH/4P+H3JD+RJzaTu9lkSRxGtOM7C0WScaVTlJndYx8ShxoOZvrITY8R/upKe2lcGKGjozi0rzQqbYPT/6hLMo6aAzJKZ9P4Q1PIte+uiS1QfnwJf/rbmZA9univRb+6L3Fbh27Kr+lx+nqqk8WiCpUpBGKag0b3p582J54gzigpeFPyk4JZ9lAkc5T8bqgv2hL3EQnHr1TMBlx2UWeMAoL+lrxo9jgg/iK30JItxuTb/nFFngmzyUly17zWWWC82vw/XOGNauk=) 2026-03-30 00:21:25.732725 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqoMyiJDFci3nZhMyYjCQMhOEWC3qU9UMyEFHB6SRAFPvZ8EicUlNX91qd5R47mb5/r9WDVvyyTAF+HdcNc8HQ=) 2026-03-30 00:21:25.732752 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM9MrbaienFtyEtsinANne78kXwRYQPNGT5fMPx2r4v9) 2026-03-30 00:21:29.458293 | orchestrator | 2026-03-30 00:21:29.458375 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:29.458385 | orchestrator | Monday 30 March 2026 00:21:25 +0000 (0:00:00.946) 0:00:23.314 ********** 2026-03-30 00:21:29.458408 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6m3mOI65tiDPfOtJaIBbjSuGUhr0SxXn3BC5qL44QAtzThvTdjQuWqlj5cV8OwrW7OZr8nRliBNt5wu8gmP5kwj+RRLkh9Uaijuw/xVqBTNamxg/98jyBRYzbwFiTC3AIROO+IXymlGzKDTGj79svzNq/8n2sbW/dhRzqIyOFQvELm/6OZhxU5j7mlh5YHd5DEry2quJJXlw4o8q79/oeTY6SmL8RlAtmXInMvh/J+FBKDqp7FJ2PKNgGvwyplO0g9OjbZ5Sb33QT5c6VfpVSkdeiBRaYoAzTRpMBG2s40oFhYH4ZRotXULfBDcX/xTJ/NY0GM/HOOuTNIeyh50dzOKK9JkknwCFz7/cJW6XriZ6zEtuZ1i2pKHC6OClOEPOg9sKqOCtGuGRI64pFHV8jiTdAmHkk5escOQEEj1bP4LiXPBY/5WD9h7YSzEtYnGPzyXw8+yBDuIJ17cu0tWPyqoIFwqRYd7UdP1slLcOopwIp80BoZecyrZpODKSAP4k=) 2026-03-30 00:21:29.458419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASXkd6QtB4Dd7Gvh7T3JQSAXiP8Lam9ezZFmNgv6uA2sjeWZtgnjsE3nS/DT2X4JqIJDBQCWlctdMjT489uvYE=) 2026-03-30 00:21:29.458448 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINJzaESuRGUleZ8rrs9vYo6l9ePzk1+Sk25gDVrqfCI8) 2026-03-30 00:21:29.458458 | orchestrator | 2026-03-30 00:21:29.458465 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:29.458473 | orchestrator | Monday 30 March 2026 00:21:26 +0000 (0:00:00.946) 0:00:24.260 ********** 2026-03-30 00:21:29.458480 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMIpj4zipPuNldmoTdLxUxnu1RzXdA63bFRhYhwm/XSo0GqCKDlPa9qCwOatAb7JUaDnitscV08XjnsK80sYE7M=) 2026-03-30 00:21:29.458489 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvKTzbA6aEXHPg9W6MfD+lJtaDfFnkcrEHKwWGXoFlx) 2026-03-30 00:21:29.458498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwctVSU5ey+ZBN/Kqp18AfKbrgtspF6ENDp+sv0kgQmMF99jA59RklbG0laIjqu4VpBHYSbbwfGMruqYAiW/84fNvRulHvDT8RWI35Zri3wgxt0cOYQesNpe5xwemcvCFpB6qDjSYhgyf8ygNwsOMb5kYZZK+uNHdoU2vJR2MkDOyxF1Ugl8Jv7c64lFckneSL1jDsvzglWMy0BlxxxNif8pKwILVt2UUlnTzdWiB8Z/PqIJCj9vNV3HIRuhbhIqCworp8ieTb6daffDHjEDdsuALl+hTk2uxNzOUmsGbG0aL75lEADVf8YrbzRrBzBrscZHJ7vBHjIdx4ATtyIFjIuRzZ6OD2wFRpzzVp4j4G+IbDVG/YAU3ildPbjhMbwojZx52lSZiZ1p2FHMKM7xTdQnXT6UtN2x93DhRnftuK9j03L2n991J7/jffXbBBk49+3KWHOlhBCBVKXM1WrEBp4r1P6ztUUkn2cXfNSTWlxilbU8KbRPxAzSfJxipvaGs=) 2026-03-30 00:21:29.458507 | orchestrator | 2026-03-30 00:21:29.458515 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-30 00:21:29.458522 | orchestrator | Monday 30 March 2026 00:21:27 +0000 (0:00:00.959) 0:00:25.219 ********** 2026-03-30 00:21:29.458529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFcEOOCP2PXYx7JJh+fF6pkRYHOQPHjy0tfrBeNkQ/st) 2026-03-30 00:21:29.458536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1Ujuh4/Iv50mrgGlrTMlTYBrlZY4OWM9JiVAK8pIc6xaZY7alD8u9xMEy+OSi14ArRhzqlvXSpb1b2KCORH/SkbykMAqRxApaYozp1Fn5r2rdK9AZTn4kYU5JHWd1FsuGZg0UVHMTA3wAoNyfhKFNDMYpl2JdmCRlLSePBAbSoEfn92JWMbS9LgoaJ98PUzP4O1IcqMJimiVtQ4YfjuK/L3WBp7+AFEu5ywklfi4VZZ01cGnxEbJBaxvAf5BDbGdXTaTQp5ATqKSx+x7Z6HQvPbXJioephNYfDrf3KLbKocjN/9w1QUccB44YiBOOBUseaFI74eSCbQ1gWmJzSn5pqG3fiE5V1Br6yCIyxzqInGg8nS4GSii7EszT4lthViwUayaDE7P2baYRNIWGXzxfOXeSE/tMoPWc9sPAY9awVlfvbL2tGX4acDW/gE227co6fiW0irjFOnTgQB2gZhXbAvHYyxaCkiCGc/BZGBFEl0Jx3EP1XWLnKM+sUYnibr0=) 2026-03-30 00:21:29.458543 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOeKeODDvtoc3AHtk57RFSZGBPcKUCP0uUd0FjCRJbwbIcinc+lsOcrZpruEhwn2e6Zxbvgufo/4oUXcFyB3HHQ=) 2026-03-30 00:21:29.458549 | orchestrator | 2026-03-30 00:21:29.458556 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-30 00:21:29.458563 | orchestrator | Monday 30 March 2026 00:21:28 +0000 (0:00:00.981) 0:00:26.201 ********** 2026-03-30 00:21:29.458570 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-30 00:21:29.458576 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-30 00:21:29.458583 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-30 00:21:29.458589 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-30 00:21:29.458596 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-30 00:21:29.458602 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-30 00:21:29.458608 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-30 00:21:29.458615 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:21:29.458621 | orchestrator | 2026-03-30 00:21:29.458643 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-03-30 00:21:29.458650 | orchestrator | Monday 30 March 2026 00:21:28 +0000 (0:00:00.163) 0:00:26.364 ********** 2026-03-30 00:21:29.458662 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:21:29.458669 | orchestrator | 2026-03-30 00:21:29.458675 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-30 00:21:29.458681 | orchestrator | Monday 30 March 2026 00:21:28 +0000 (0:00:00.045) 0:00:26.410 ********** 2026-03-30 00:21:29.458688 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:21:29.458694 | orchestrator | 2026-03-30 00:21:29.458701 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-30 00:21:29.458707 | orchestrator | Monday 30 March 2026 00:21:28 +0000 (0:00:00.043) 0:00:26.453 ********** 2026-03-30 00:21:29.458714 | orchestrator | changed: [testbed-manager] 2026-03-30 00:21:29.458721 | orchestrator | 2026-03-30 00:21:29.458727 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:21:29.458733 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-30 00:21:29.458742 | orchestrator | 2026-03-30 00:21:29.458748 | orchestrator | 2026-03-30 00:21:29.458755 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:21:29.458761 | orchestrator | Monday 30 March 2026 00:21:29 +0000 (0:00:00.425) 0:00:26.879 ********** 2026-03-30 00:21:29.458768 | orchestrator | =============================================================================== 2026-03-30 00:21:29.458774 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.50s 2026-03-30 00:21:29.458781 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2026-03-30 00:21:29.458788 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-30 00:21:29.458795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-30 00:21:29.458802 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-30 00:21:29.458808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-30 00:21:29.458814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-30 00:21:29.458821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-30 00:21:29.458827 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-30 00:21:29.458833 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-30 00:21:29.458840 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-03-30 00:21:29.458853 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-30 00:21:29.458860 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-30 00:21:29.458867 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-30 00:21:29.458873 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-03-30 00:21:29.458883 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-03-30 00:21:29.458890 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.43s 2026-03-30 00:21:29.458897 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-30 00:21:29.458904 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-30 00:21:29.458911 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.15s 2026-03-30 00:21:29.577663 | orchestrator | + osism apply squid 2026-03-30 00:21:40.722561 | orchestrator | 2026-03-30 00:21:40 | INFO  | Prepare task for execution of squid. 2026-03-30 00:21:40.791400 | orchestrator | 2026-03-30 00:21:40 | INFO  | Task 7fa852a1-eed4-4f1a-94ba-3e2b4e71da7d (squid) was prepared for execution. 2026-03-30 00:21:40.791488 | orchestrator | 2026-03-30 00:21:40 | INFO  | It takes a moment until task 7fa852a1-eed4-4f1a-94ba-3e2b4e71da7d (squid) has been started and output is visible here. 2026-03-30 00:23:36.650326 | orchestrator | 2026-03-30 00:23:36.650417 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-30 00:23:36.650427 | orchestrator | 2026-03-30 00:23:36.650435 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-30 00:23:36.650442 | orchestrator | Monday 30 March 2026 00:21:43 +0000 (0:00:00.176) 0:00:00.176 ********** 2026-03-30 00:23:36.650450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-30 00:23:36.650457 | orchestrator | 2026-03-30 00:23:36.650463 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-30 00:23:36.650469 | orchestrator | Monday 30 March 2026 00:21:43 +0000 (0:00:00.068) 0:00:00.245 ********** 2026-03-30 00:23:36.650476 | orchestrator | ok: [testbed-manager] 2026-03-30 00:23:36.650483 | orchestrator | 2026-03-30 00:23:36.650490 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-30 00:23:36.650496 | orchestrator | Monday 30 March 2026 
00:21:45 +0000 (0:00:01.989) 0:00:02.234 ********** 2026-03-30 00:23:36.650503 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-30 00:23:36.650510 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-30 00:23:36.650517 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-30 00:23:36.650523 | orchestrator | 2026-03-30 00:23:36.650529 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-30 00:23:36.650535 | orchestrator | Monday 30 March 2026 00:21:46 +0000 (0:00:01.117) 0:00:03.352 ********** 2026-03-30 00:23:36.650541 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-30 00:23:36.650547 | orchestrator | 2026-03-30 00:23:36.650553 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-30 00:23:36.650560 | orchestrator | Monday 30 March 2026 00:21:47 +0000 (0:00:01.046) 0:00:04.398 ********** 2026-03-30 00:23:36.650566 | orchestrator | ok: [testbed-manager] 2026-03-30 00:23:36.650572 | orchestrator | 2026-03-30 00:23:36.650578 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-30 00:23:36.650600 | orchestrator | Monday 30 March 2026 00:21:48 +0000 (0:00:00.330) 0:00:04.729 ********** 2026-03-30 00:23:36.650607 | orchestrator | changed: [testbed-manager] 2026-03-30 00:23:36.650613 | orchestrator | 2026-03-30 00:23:36.650619 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-30 00:23:36.650625 | orchestrator | Monday 30 March 2026 00:21:49 +0000 (0:00:00.916) 0:00:05.646 ********** 2026-03-30 00:23:36.650632 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-30 00:23:36.650639 | orchestrator | ok: [testbed-manager] 2026-03-30 00:23:36.650645 | orchestrator | 2026-03-30 00:23:36.650651 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-30 00:23:36.650656 | orchestrator | Monday 30 March 2026 00:22:19 +0000 (0:00:30.593) 0:00:36.239 ********** 2026-03-30 00:23:36.650662 | orchestrator | changed: [testbed-manager] 2026-03-30 00:23:36.650669 | orchestrator | 2026-03-30 00:23:36.650675 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-30 00:23:36.650682 | orchestrator | Monday 30 March 2026 00:22:35 +0000 (0:00:16.007) 0:00:52.247 ********** 2026-03-30 00:23:36.650688 | orchestrator | Pausing for 60 seconds 2026-03-30 00:23:36.650695 | orchestrator | changed: [testbed-manager] 2026-03-30 00:23:36.650701 | orchestrator | 2026-03-30 00:23:36.650707 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-30 00:23:36.650714 | orchestrator | Monday 30 March 2026 00:23:35 +0000 (0:01:00.090) 0:01:52.338 ********** 2026-03-30 00:23:36.650720 | orchestrator | ok: [testbed-manager] 2026-03-30 00:23:36.650726 | orchestrator | 2026-03-30 00:23:36.650731 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-30 00:23:36.650758 | orchestrator | Monday 30 March 2026 00:23:35 +0000 (0:00:00.072) 0:01:52.410 ********** 2026-03-30 00:23:36.650765 | orchestrator | changed: [testbed-manager] 2026-03-30 00:23:36.650772 | orchestrator | 2026-03-30 00:23:36.650778 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:23:36.650783 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:23:36.650789 | orchestrator | 2026-03-30 00:23:36.650795 | orchestrator | 2026-03-30 00:23:36.650802 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-30 00:23:36.650808 | orchestrator | Monday 30 March 2026 00:23:36 +0000 (0:00:00.570) 0:01:52.981 ********** 2026-03-30 00:23:36.650814 | orchestrator | =============================================================================== 2026-03-30 00:23:36.650820 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-30 00:23:36.650826 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.59s 2026-03-30 00:23:36.650833 | orchestrator | osism.services.squid : Restart squid service --------------------------- 16.01s 2026-03-30 00:23:36.650839 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.99s 2026-03-30 00:23:36.650845 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2026-03-30 00:23:36.650851 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2026-03-30 00:23:36.650857 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2026-03-30 00:23:36.650863 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s 2026-03-30 00:23:36.650868 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-03-30 00:23:36.650874 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-30 00:23:36.650880 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-30 00:23:36.811815 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-30 00:23:36.811866 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-30 00:23:36.819055 | orchestrator | + set -e 2026-03-30 00:23:36.819084 | orchestrator | + NAMESPACE=kolla 2026-03-30 
00:23:36.819092 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-30 00:23:36.823229 | orchestrator | ++ semver latest 9.0.0 2026-03-30 00:23:36.881598 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-30 00:23:36.881682 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-30 00:23:36.882394 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-30 00:23:48.268536 | orchestrator | 2026-03-30 00:23:48 | INFO  | Prepare task for execution of operator. 2026-03-30 00:23:48.343476 | orchestrator | 2026-03-30 00:23:48 | INFO  | Task 08715c56-a8f8-4e10-b46f-cf96e7dcc6d7 (operator) was prepared for execution. 2026-03-30 00:23:48.343573 | orchestrator | 2026-03-30 00:23:48 | INFO  | It takes a moment until task 08715c56-a8f8-4e10-b46f-cf96e7dcc6d7 (operator) has been started and output is visible here. 2026-03-30 00:24:03.323709 | orchestrator | 2026-03-30 00:24:03.323825 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-30 00:24:03.323842 | orchestrator | 2026-03-30 00:24:03.323852 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 00:24:03.323863 | orchestrator | Monday 30 March 2026 00:23:51 +0000 (0:00:00.180) 0:00:00.180 ********** 2026-03-30 00:24:03.323873 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:24:03.323885 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:24:03.323895 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:24:03.323905 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:24:03.323915 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:24:03.323924 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:24:03.323937 | orchestrator | 2026-03-30 00:24:03.323948 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-30 00:24:03.323979 | orchestrator | Monday 30 March 2026 00:23:54 
+0000 (0:00:03.353) 0:00:03.534 ********** 2026-03-30 00:24:03.323989 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:24:03.323999 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:24:03.324008 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:24:03.324051 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:24:03.324061 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:24:03.324070 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:24:03.324080 | orchestrator | 2026-03-30 00:24:03.324089 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-30 00:24:03.324099 | orchestrator | 2026-03-30 00:24:03.324109 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-30 00:24:03.324119 | orchestrator | Monday 30 March 2026 00:23:55 +0000 (0:00:00.885) 0:00:04.420 ********** 2026-03-30 00:24:03.324129 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:24:03.324138 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:24:03.324148 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:24:03.324157 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:24:03.324167 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:24:03.324176 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:24:03.324186 | orchestrator | 2026-03-30 00:24:03.324195 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-30 00:24:03.324222 | orchestrator | Monday 30 March 2026 00:23:55 +0000 (0:00:00.152) 0:00:04.573 ********** 2026-03-30 00:24:03.324233 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:24:03.324242 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:24:03.324252 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:24:03.324263 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:24:03.324274 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:24:03.324285 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:24:03.324296 | orchestrator | 
2026-03-30 00:24:03.324309 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-30 00:24:03.324320 | orchestrator | Monday 30 March 2026 00:23:55 +0000 (0:00:00.146) 0:00:04.719 **********
2026-03-30 00:24:03.324331 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:03.324344 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:03.324355 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:03.324366 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:03.324377 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:03.324388 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:03.324400 | orchestrator |
2026-03-30 00:24:03.324412 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-30 00:24:03.324423 | orchestrator | Monday 30 March 2026 00:23:56 +0000 (0:00:00.743) 0:00:05.463 **********
2026-03-30 00:24:03.324434 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:03.324446 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:03.324457 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:03.324468 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:03.324480 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:03.324491 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:03.324502 | orchestrator |
2026-03-30 00:24:03.324513 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-30 00:24:03.324525 | orchestrator | Monday 30 March 2026 00:23:57 +0000 (0:00:00.882) 0:00:06.346 **********
2026-03-30 00:24:03.324537 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-30 00:24:03.324548 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-30 00:24:03.324560 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-30 00:24:03.324571 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-30 00:24:03.324582 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-30 00:24:03.324593 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-30 00:24:03.324604 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-30 00:24:03.324615 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-30 00:24:03.324626 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-30 00:24:03.324643 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-30 00:24:03.324653 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-30 00:24:03.324663 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-30 00:24:03.324672 | orchestrator |
2026-03-30 00:24:03.324682 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-30 00:24:03.324692 | orchestrator | Monday 30 March 2026 00:23:58 +0000 (0:00:01.162) 0:00:07.508 **********
2026-03-30 00:24:03.324702 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:03.324711 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:03.324721 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:03.324730 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:03.324740 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:03.324749 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:03.324759 | orchestrator |
2026-03-30 00:24:03.324768 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-30 00:24:03.324779 | orchestrator | Monday 30 March 2026 00:24:00 +0000 (0:00:01.247) 0:00:08.756 **********
2026-03-30 00:24:03.324789 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-30 00:24:03.324798 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-30 00:24:03.324808 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-30 00:24:03.324818 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-30 00:24:03.324827 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-30 00:24:03.324856 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-30 00:24:03.324866 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-30 00:24:03.324875 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-30 00:24:03.324885 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-30 00:24:03.324895 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-30 00:24:03.324904 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-30 00:24:03.324914 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-30 00:24:03.324923 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-30 00:24:03.324933 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-30 00:24:03.324942 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-30 00:24:03.324957 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-30 00:24:03.324967 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-30 00:24:03.324976 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-30 00:24:03.324986 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-30 00:24:03.324995 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-30 00:24:03.325005 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-30 00:24:03.325028 | orchestrator |
2026-03-30 00:24:03.325039 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-30 00:24:03.325049 | orchestrator | Monday 30 March 2026 00:24:01 +0000 (0:00:01.329) 0:00:10.086 **********
2026-03-30 00:24:03.325059 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:03.325069 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:03.325078 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:03.325088 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:03.325098 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:03.325107 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:03.325117 | orchestrator |
2026-03-30 00:24:03.325127 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-30 00:24:03.325143 | orchestrator | Monday 30 March 2026 00:24:01 +0000 (0:00:00.145) 0:00:10.231 **********
2026-03-30 00:24:03.325152 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:03.325162 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:03.325172 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:03.325181 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:03.325191 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:03.325200 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:03.325210 | orchestrator |
2026-03-30 00:24:03.325220 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-30 00:24:03.325229 | orchestrator | Monday 30 March 2026 00:24:01 +0000 (0:00:00.170) 0:00:10.402 **********
2026-03-30 00:24:03.325239 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:03.325249 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:03.325258 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:03.325268 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:03.325277 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:03.325287 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:03.325296 | orchestrator |
2026-03-30 00:24:03.325306 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-30 00:24:03.325316 | orchestrator | Monday 30 March 2026 00:24:02 +0000 (0:00:00.505) 0:00:10.908 **********
2026-03-30 00:24:03.325325 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:03.325335 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:03.325344 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:03.325354 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:03.325363 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:03.325373 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:03.325383 | orchestrator |
2026-03-30 00:24:03.325392 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-30 00:24:03.325402 | orchestrator | Monday 30 March 2026 00:24:02 +0000 (0:00:00.164) 0:00:11.072 **********
2026-03-30 00:24:03.325412 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-30 00:24:03.325421 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:03.325431 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-30 00:24:03.325441 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-30 00:24:03.325450 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:03.325460 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-30 00:24:03.325469 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:03.325479 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-30 00:24:03.325488 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:03.325498 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:03.325507 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-30 00:24:03.325517 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:03.325527 | orchestrator |
2026-03-30 00:24:03.325536 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-30 00:24:03.325546 | orchestrator | Monday 30 March 2026 00:24:03 +0000 (0:00:00.723) 0:00:11.796 **********
2026-03-30 00:24:03.325556 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:03.325565 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:03.325575 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:03.325584 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:03.325594 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:03.325603 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:03.325613 | orchestrator |
2026-03-30 00:24:03.325623 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-30 00:24:03.325632 | orchestrator | Monday 30 March 2026 00:24:03 +0000 (0:00:00.144) 0:00:11.941 **********
2026-03-30 00:24:03.325642 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:03.325651 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:03.325661 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:03.325670 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:03.325694 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:04.524856 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:04.524954 | orchestrator |
2026-03-30 00:24:04.524969 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-30 00:24:04.524980 | orchestrator | Monday 30 March 2026 00:24:03 +0000 (0:00:00.150) 0:00:12.092 **********
2026-03-30 00:24:04.524989 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:04.524998 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:04.525007 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:04.525078 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:04.525087 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:04.525097 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:04.525105 | orchestrator |
2026-03-30 00:24:04.525113 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-30 00:24:04.525122 | orchestrator | Monday 30 March 2026 00:24:03 +0000 (0:00:00.158) 0:00:12.251 **********
2026-03-30 00:24:04.525131 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:04.525139 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:04.525149 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:04.525155 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:04.525160 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:04.525165 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:04.525170 | orchestrator |
2026-03-30 00:24:04.525175 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-30 00:24:04.525181 | orchestrator | Monday 30 March 2026 00:24:04 +0000 (0:00:00.648) 0:00:12.899 **********
2026-03-30 00:24:04.525186 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:24:04.525191 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:24:04.525196 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:24:04.525201 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:04.525206 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:04.525211 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:04.525216 | orchestrator |
2026-03-30 00:24:04.525221 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:24:04.525246 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 00:24:04.525252 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 00:24:04.525257 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 00:24:04.525262 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 00:24:04.525267 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 00:24:04.525272 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 00:24:04.525277 | orchestrator |
2026-03-30 00:24:04.525282 | orchestrator |
2026-03-30 00:24:04.525287 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:24:04.525292 | orchestrator | Monday 30 March 2026 00:24:04 +0000 (0:00:00.200) 0:00:13.100 **********
2026-03-30 00:24:04.525297 | orchestrator | ===============================================================================
2026-03-30 00:24:04.525302 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2026-03-30 00:24:04.525308 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s
2026-03-30 00:24:04.525314 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2026-03-30 00:24:04.525336 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2026-03-30 00:24:04.525342 | orchestrator | Do not require tty for all users ---------------------------------------- 0.89s
2026-03-30 00:24:04.525347 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s
2026-03-30 00:24:04.525352 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.74s
2026-03-30 00:24:04.525357 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2026-03-30 00:24:04.525362 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2026-03-30 00:24:04.525367 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.51s
2026-03-30 00:24:04.525372 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-03-30 00:24:04.525377 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-30 00:24:04.525382 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-03-30 00:24:04.525387 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-03-30 00:24:04.525392 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2026-03-30 00:24:04.525397 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-03-30 00:24:04.525402 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-03-30 00:24:04.525407 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-03-30 00:24:04.525413 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-03-30 00:24:04.691488 | orchestrator | + osism apply --environment custom facts
2026-03-30 00:24:05.889505 | orchestrator | 2026-03-30 00:24:05 | INFO  | Trying to run play facts in environment custom
2026-03-30 00:24:16.052555 | orchestrator | 2026-03-30 00:24:16 | INFO  | Prepare task for execution of facts.
2026-03-30 00:24:16.129930 | orchestrator | 2026-03-30 00:24:16 | INFO  | Task 41f585df-fb86-4b7a-8bcc-4f4cb44d348c (facts) was prepared for execution.
2026-03-30 00:24:16.130110 | orchestrator | 2026-03-30 00:24:16 | INFO  | It takes a moment until task 41f585df-fb86-4b7a-8bcc-4f4cb44d348c (facts) has been started and output is visible here.
2026-03-30 00:24:58.546259 | orchestrator |
2026-03-30 00:24:58.546392 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-30 00:24:58.546413 | orchestrator |
2026-03-30 00:24:58.546433 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-30 00:24:58.546486 | orchestrator | Monday 30 March 2026 00:24:19 +0000 (0:00:00.113) 0:00:00.113 **********
2026-03-30 00:24:58.546507 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:58.546528 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:58.546547 | orchestrator | ok: [testbed-manager]
2026-03-30 00:24:58.546567 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:58.546584 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:58.546604 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:58.546622 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:58.546637 | orchestrator |
2026-03-30 00:24:58.546649 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-30 00:24:58.546660 | orchestrator | Monday 30 March 2026 00:24:20 +0000 (0:00:01.441) 0:00:01.555 **********
2026-03-30 00:24:58.546671 | orchestrator | ok: [testbed-manager]
2026-03-30 00:24:58.546681 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:58.546692 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:58.546703 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:24:58.546714 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:24:58.546725 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:24:58.546736 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:58.546747 | orchestrator |
2026-03-30 00:24:58.546782 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-30 00:24:58.546793 | orchestrator |
2026-03-30 00:24:58.546804 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-30 00:24:58.546815 | orchestrator | Monday 30 March 2026 00:24:21 +0000 (0:00:01.319) 0:00:02.874 **********
2026-03-30 00:24:58.546826 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.546837 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.546847 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.546858 | orchestrator |
2026-03-30 00:24:58.546869 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-30 00:24:58.546881 | orchestrator | Monday 30 March 2026 00:24:22 +0000 (0:00:00.095) 0:00:02.970 **********
2026-03-30 00:24:58.546891 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.546902 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.546912 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.546923 | orchestrator |
2026-03-30 00:24:58.546933 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-30 00:24:58.546944 | orchestrator | Monday 30 March 2026 00:24:22 +0000 (0:00:00.191) 0:00:03.161 **********
2026-03-30 00:24:58.546954 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.546965 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.546976 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.546986 | orchestrator |
2026-03-30 00:24:58.547075 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-30 00:24:58.547091 | orchestrator | Monday 30 March 2026 00:24:22 +0000 (0:00:00.193) 0:00:03.356 **********
2026-03-30 00:24:58.547104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:24:58.547116 | orchestrator |
2026-03-30 00:24:58.547126 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-30 00:24:58.547135 | orchestrator | Monday 30 March 2026 00:24:22 +0000 (0:00:00.113) 0:00:03.469 **********
2026-03-30 00:24:58.547145 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.547154 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.547163 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.547173 | orchestrator |
2026-03-30 00:24:58.547182 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-30 00:24:58.547192 | orchestrator | Monday 30 March 2026 00:24:22 +0000 (0:00:00.435) 0:00:03.905 **********
2026-03-30 00:24:58.547201 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:58.547210 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:58.547220 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:58.547229 | orchestrator |
2026-03-30 00:24:58.547239 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-30 00:24:58.547248 | orchestrator | Monday 30 March 2026 00:24:23 +0000 (0:00:00.127) 0:00:04.032 **********
2026-03-30 00:24:58.547258 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:58.547267 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:58.547277 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:58.547286 | orchestrator |
2026-03-30 00:24:58.547296 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-30 00:24:58.547305 | orchestrator | Monday 30 March 2026 00:24:24 +0000 (0:00:01.024) 0:00:05.057 **********
2026-03-30 00:24:58.547315 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.547324 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.547333 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.547343 | orchestrator |
2026-03-30 00:24:58.547353 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-30 00:24:58.547362 | orchestrator | Monday 30 March 2026 00:24:24 +0000 (0:00:00.439) 0:00:05.496 **********
2026-03-30 00:24:58.547371 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:58.547381 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:58.547390 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:58.547400 | orchestrator |
2026-03-30 00:24:58.547418 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-30 00:24:58.547428 | orchestrator | Monday 30 March 2026 00:24:25 +0000 (0:00:01.124) 0:00:06.621 **********
2026-03-30 00:24:58.547437 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:58.547447 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:58.547456 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:58.547465 | orchestrator |
2026-03-30 00:24:58.547475 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-30 00:24:58.547484 | orchestrator | Monday 30 March 2026 00:24:41 +0000 (0:00:15.876) 0:00:22.498 **********
2026-03-30 00:24:58.547494 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:24:58.547503 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:24:58.547513 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:24:58.547522 | orchestrator |
2026-03-30 00:24:58.547532 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-30 00:24:58.547561 | orchestrator | Monday 30 March 2026 00:24:41 +0000 (0:00:00.088) 0:00:22.586 **********
2026-03-30 00:24:58.547574 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:24:58.547590 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:24:58.547607 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:24:58.547623 | orchestrator |
2026-03-30 00:24:58.547640 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-30 00:24:58.547655 | orchestrator | Monday 30 March 2026 00:24:49 +0000 (0:00:07.976) 0:00:30.563 **********
2026-03-30 00:24:58.547670 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.547685 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.547703 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.547719 | orchestrator |
2026-03-30 00:24:58.547736 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-30 00:24:58.547753 | orchestrator | Monday 30 March 2026 00:24:50 +0000 (0:00:00.419) 0:00:30.982 **********
2026-03-30 00:24:58.547770 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-30 00:24:58.547788 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-30 00:24:58.547804 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-30 00:24:58.547821 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-30 00:24:58.547838 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-30 00:24:58.547855 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-30 00:24:58.547872 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-30 00:24:58.547889 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-30 00:24:58.547904 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-30 00:24:58.547918 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-30 00:24:58.547928 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-30 00:24:58.547938 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-30 00:24:58.547947 | orchestrator |
2026-03-30 00:24:58.547957 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-30 00:24:58.547966 | orchestrator | Monday 30 March 2026 00:24:53 +0000 (0:00:03.546) 0:00:34.529 **********
2026-03-30 00:24:58.547976 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.547985 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.548020 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.548032 | orchestrator |
2026-03-30 00:24:58.548041 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-30 00:24:58.548051 | orchestrator |
2026-03-30 00:24:58.548060 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-30 00:24:58.548111 | orchestrator | Monday 30 March 2026 00:24:54 +0000 (0:00:01.274) 0:00:35.803 **********
2026-03-30 00:24:58.548122 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:24:58.548141 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:24:58.548151 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:24:58.548161 | orchestrator | ok: [testbed-manager]
2026-03-30 00:24:58.548170 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:24:58.548180 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:24:58.548189 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:24:58.548199 | orchestrator |
2026-03-30 00:24:58.548209 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:24:58.548219 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:24:58.548230 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:24:58.548241 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:24:58.548251 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:24:58.548260 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:24:58.548270 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:24:58.548280 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:24:58.548289 | orchestrator |
2026-03-30 00:24:58.548299 | orchestrator |
2026-03-30 00:24:58.548309 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:24:58.548318 | orchestrator | Monday 30 March 2026 00:24:58 +0000 (0:00:03.636) 0:00:39.440 **********
2026-03-30 00:24:58.548328 | orchestrator | ===============================================================================
2026-03-30 00:24:58.548338 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.88s
2026-03-30 00:24:58.548347 | orchestrator | Install required packages (Debian) -------------------------------------- 7.98s
2026-03-30 00:24:58.548357 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.64s
2026-03-30 00:24:58.548366 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s
2026-03-30 00:24:58.548375 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2026-03-30 00:24:58.548385 | orchestrator | Copy fact file ---------------------------------------------------------- 1.32s
2026-03-30 00:24:58.548404 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s
2026-03-30 00:24:58.726199 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s
2026-03-30 00:24:58.726309 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2026-03-30 00:24:58.726321 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-30 00:24:58.726331 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-03-30 00:24:58.726340 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2026-03-30 00:24:58.726348 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2026-03-30 00:24:58.726357 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-03-30 00:24:58.726366 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-30 00:24:58.726374 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2026-03-30 00:24:58.726384 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-30 00:24:58.726392 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-30 00:24:58.903740 | orchestrator | + osism apply bootstrap
2026-03-30 00:25:10.342276 | orchestrator | 2026-03-30 00:25:10 | INFO  | Prepare task for execution of bootstrap.
2026-03-30 00:25:10.417315 | orchestrator | 2026-03-30 00:25:10 | INFO  | Task 18635d81-e224-463f-adb5-c5d7ff856ec3 (bootstrap) was prepared for execution.
2026-03-30 00:25:10.417431 | orchestrator | 2026-03-30 00:25:10 | INFO  | It takes a moment until task 18635d81-e224-463f-adb5-c5d7ff856ec3 (bootstrap) has been started and output is visible here.
2026-03-30 00:25:25.519496 | orchestrator |
2026-03-30 00:25:25.519611 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-30 00:25:25.519628 | orchestrator |
2026-03-30 00:25:25.519641 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-30 00:25:25.519652 | orchestrator | Monday 30 March 2026 00:25:13 +0000 (0:00:00.161) 0:00:00.161 **********
2026-03-30 00:25:25.519664 | orchestrator | ok: [testbed-manager]
2026-03-30 00:25:25.519676 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:25:25.519687 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:25:25.519698 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:25:25.519708 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:25:25.519719 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:25:25.519730 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:25:25.519741 | orchestrator |
2026-03-30 00:25:25.519752 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-30 00:25:25.519762 | orchestrator |
2026-03-30 00:25:25.519773 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-30 00:25:25.519784 | orchestrator | Monday 30 March 2026 00:25:13 +0000 (0:00:00.211) 0:00:00.373 **********
2026-03-30 00:25:25.519795 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:25:25.519807 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:25:25.519818 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:25:25.519829 | orchestrator | ok: [testbed-manager]
2026-03-30 00:25:25.519840 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:25:25.519850 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:25:25.519861 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:25:25.519872 | orchestrator |
2026-03-30 00:25:25.519883 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-30 00:25:25.519894 | orchestrator |
2026-03-30 00:25:25.519904 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-30 00:25:25.519915 | orchestrator | Monday 30 March 2026 00:25:18 +0000 (0:00:04.769) 0:00:05.142 **********
2026-03-30 00:25:25.519927 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-30 00:25:25.519938 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-30 00:25:25.519949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-30 00:25:25.519960 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-30 00:25:25.519971 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-30 00:25:25.519981 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-30 00:25:25.520021 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-30 00:25:25.520035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-30 00:25:25.520047 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-30 00:25:25.520059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-30 00:25:25.520071 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-30 00:25:25.520084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-30 00:25:25.520097 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-30 00:25:25.520109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-30 00:25:25.520121 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-30 00:25:25.520133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-30 00:25:25.520172 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-30 00:25:25.520185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-30 00:25:25.520197 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:25:25.520209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-30 00:25:25.520221 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-30 00:25:25.520233 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-30 00:25:25.520245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-30 00:25:25.520257 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:25:25.520269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-30 00:25:25.520281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-30 00:25:25.520293 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-30 00:25:25.520305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-30 00:25:25.520317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-30 00:25:25.520330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:25:25.520341 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-30 00:25:25.520353 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-30 00:25:25.520366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-30 00:25:25.520377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:25:25.520388 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-30 00:25:25.520399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-30 00:25:25.520409 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-30 00:25:25.520420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:25:25.520431 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:25:25.520442 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-30 00:25:25.520452 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-30 00:25:25.520464 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:25:25.520475 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-30 00:25:25.520485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-30 00:25:25.520496 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-30 00:25:25.520507 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:25:25.520518 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-30 00:25:25.520547 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-30 00:25:25.520558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-30 00:25:25.520569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-30 00:25:25.520580 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-30 00:25:25.520591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-30 00:25:25.520602 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-30 00:25:25.520612 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:25:25.520623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-30 00:25:25.520634 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:25:25.520645 | orchestrator | 2026-03-30
00:25:25.520656 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-30 00:25:25.520667 | orchestrator | 2026-03-30 00:25:25.520678 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-30 00:25:25.520688 | orchestrator | Monday 30 March 2026 00:25:19 +0000 (0:00:00.472) 0:00:05.615 ********** 2026-03-30 00:25:25.520699 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:25.520710 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:25.520729 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:25.520740 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:25.520751 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:25.520761 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:25.520772 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:25.520783 | orchestrator | 2026-03-30 00:25:25.520794 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-30 00:25:25.520805 | orchestrator | Monday 30 March 2026 00:25:20 +0000 (0:00:01.174) 0:00:06.789 ********** 2026-03-30 00:25:25.520815 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:25.520826 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:25.520837 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:25.520848 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:25.520859 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:25.520869 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:25.520880 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:25.520891 | orchestrator | 2026-03-30 00:25:25.520901 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-30 00:25:25.520912 | orchestrator | Monday 30 March 2026 00:25:21 +0000 (0:00:01.303) 0:00:08.092 ********** 2026-03-30 00:25:25.520924 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:25:25.520938 | orchestrator | 2026-03-30 00:25:25.520949 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-30 00:25:25.520960 | orchestrator | Monday 30 March 2026 00:25:21 +0000 (0:00:00.244) 0:00:08.337 ********** 2026-03-30 00:25:25.520971 | orchestrator | changed: [testbed-manager] 2026-03-30 00:25:25.520982 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:25:25.521009 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:25.521020 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:25.521031 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:25.521042 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:25:25.521052 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:25:25.521063 | orchestrator | 2026-03-30 00:25:25.521074 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-30 00:25:25.521085 | orchestrator | Monday 30 March 2026 00:25:23 +0000 (0:00:01.429) 0:00:09.767 ********** 2026-03-30 00:25:25.521096 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:25.521109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:25:25.521121 | orchestrator | 2026-03-30 00:25:25.521132 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-30 00:25:25.521160 | orchestrator | Monday 30 March 2026 00:25:23 +0000 (0:00:00.268) 0:00:10.036 ********** 2026-03-30 00:25:25.521171 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:25.521182 | 
orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:25.521193 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:25:25.521208 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:25.521219 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:25:25.521230 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:25:25.521241 | orchestrator | 2026-03-30 00:25:25.521251 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-30 00:25:25.521262 | orchestrator | Monday 30 March 2026 00:25:24 +0000 (0:00:00.992) 0:00:11.028 ********** 2026-03-30 00:25:25.521273 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:25.521284 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:25:25.521295 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:25:25.521305 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:25:25.521316 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:25.521326 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:25.521344 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:25.521355 | orchestrator | 2026-03-30 00:25:25.521366 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-30 00:25:25.521377 | orchestrator | Monday 30 March 2026 00:25:24 +0000 (0:00:00.571) 0:00:11.600 ********** 2026-03-30 00:25:25.521387 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:25:25.521398 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:25:25.521409 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:25:25.521420 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:25:25.521430 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:25:25.521441 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:25:25.521452 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:25.521463 | orchestrator | 2026-03-30 00:25:25.521490 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-30 00:25:25.521514 | orchestrator | Monday 30 March 2026 00:25:25 +0000 (0:00:00.410) 0:00:12.010 ********** 2026-03-30 00:25:25.521526 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:25.521537 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:25:25.521555 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:25:36.575484 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:25:36.575600 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:25:36.575623 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:25:36.575645 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:25:36.575664 | orchestrator | 2026-03-30 00:25:36.575684 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-30 00:25:36.575704 | orchestrator | Monday 30 March 2026 00:25:25 +0000 (0:00:00.195) 0:00:12.205 ********** 2026-03-30 00:25:36.575719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:25:36.575747 | orchestrator | 2026-03-30 00:25:36.575758 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-30 00:25:36.575770 | orchestrator | Monday 30 March 2026 00:25:25 +0000 (0:00:00.294) 0:00:12.500 ********** 2026-03-30 00:25:36.575782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:25:36.575793 | orchestrator | 2026-03-30 00:25:36.575804 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-30 
00:25:36.575815 | orchestrator | Monday 30 March 2026 00:25:26 +0000 (0:00:00.296) 0:00:12.796 ********** 2026-03-30 00:25:36.575826 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.575838 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.575849 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.575859 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.575870 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.575881 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.575891 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.575902 | orchestrator | 2026-03-30 00:25:36.575913 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-30 00:25:36.575924 | orchestrator | Monday 30 March 2026 00:25:27 +0000 (0:00:01.347) 0:00:14.144 ********** 2026-03-30 00:25:36.575936 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:36.575947 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:25:36.575958 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:25:36.575969 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:25:36.575979 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:25:36.576021 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:25:36.576035 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:25:36.576047 | orchestrator | 2026-03-30 00:25:36.576060 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-30 00:25:36.576099 | orchestrator | Monday 30 March 2026 00:25:27 +0000 (0:00:00.233) 0:00:14.377 ********** 2026-03-30 00:25:36.576112 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.576125 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.576137 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.576149 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.576161 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.576173 | orchestrator 
| ok: [testbed-node-5] 2026-03-30 00:25:36.576186 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.576198 | orchestrator | 2026-03-30 00:25:36.576211 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-30 00:25:36.576224 | orchestrator | Monday 30 March 2026 00:25:28 +0000 (0:00:00.524) 0:00:14.902 ********** 2026-03-30 00:25:36.576236 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:36.576248 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:25:36.576261 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:25:36.576273 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:25:36.576286 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:25:36.576298 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:25:36.576310 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:25:36.576322 | orchestrator | 2026-03-30 00:25:36.576335 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-30 00:25:36.576350 | orchestrator | Monday 30 March 2026 00:25:28 +0000 (0:00:00.212) 0:00:15.114 ********** 2026-03-30 00:25:36.576362 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.576382 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:36.576393 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:36.576404 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:25:36.576414 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:36.576425 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:25:36.576435 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:25:36.576446 | orchestrator | 2026-03-30 00:25:36.576457 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-30 00:25:36.576468 | orchestrator | Monday 30 March 2026 00:25:29 +0000 (0:00:00.500) 0:00:15.615 ********** 2026-03-30 00:25:36.576479 | orchestrator | ok: 
[testbed-manager] 2026-03-30 00:25:36.576490 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:36.576500 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:36.576511 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:36.576521 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:25:36.576532 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:25:36.576542 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:25:36.576557 | orchestrator | 2026-03-30 00:25:36.576576 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-30 00:25:36.576593 | orchestrator | Monday 30 March 2026 00:25:30 +0000 (0:00:01.048) 0:00:16.663 ********** 2026-03-30 00:25:36.576611 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.576631 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.576650 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.576668 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.576687 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.576706 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.576724 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.576738 | orchestrator | 2026-03-30 00:25:36.576749 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-30 00:25:36.576760 | orchestrator | Monday 30 March 2026 00:25:31 +0000 (0:00:00.974) 0:00:17.638 ********** 2026-03-30 00:25:36.576805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:25:36.576818 | orchestrator | 2026-03-30 00:25:36.576829 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-30 00:25:36.576840 | orchestrator | Monday 30 March 2026 
00:25:31 +0000 (0:00:00.298) 0:00:17.937 ********** 2026-03-30 00:25:36.576861 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:36.576872 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:36.576883 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:25:36.576893 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:36.576904 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:25:36.576914 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:36.576925 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:25:36.576936 | orchestrator | 2026-03-30 00:25:36.576947 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-30 00:25:36.576957 | orchestrator | Monday 30 March 2026 00:25:32 +0000 (0:00:01.174) 0:00:19.111 ********** 2026-03-30 00:25:36.576968 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.576979 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.577020 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.577032 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.577043 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577053 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.577064 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.577074 | orchestrator | 2026-03-30 00:25:36.577085 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-30 00:25:36.577096 | orchestrator | Monday 30 March 2026 00:25:32 +0000 (0:00:00.209) 0:00:19.321 ********** 2026-03-30 00:25:36.577107 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.577118 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.577128 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.577139 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.577149 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577160 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.577170 | 
orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.577181 | orchestrator | 2026-03-30 00:25:36.577191 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-30 00:25:36.577202 | orchestrator | Monday 30 March 2026 00:25:32 +0000 (0:00:00.201) 0:00:19.523 ********** 2026-03-30 00:25:36.577213 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.577224 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.577234 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.577245 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.577255 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577266 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.577276 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.577287 | orchestrator | 2026-03-30 00:25:36.577297 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-30 00:25:36.577308 | orchestrator | Monday 30 March 2026 00:25:33 +0000 (0:00:00.208) 0:00:19.732 ********** 2026-03-30 00:25:36.577320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:25:36.577333 | orchestrator | 2026-03-30 00:25:36.577343 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-30 00:25:36.577354 | orchestrator | Monday 30 March 2026 00:25:33 +0000 (0:00:00.259) 0:00:19.991 ********** 2026-03-30 00:25:36.577365 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.577375 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.577386 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.577397 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.577407 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577417 | orchestrator | ok: 
[testbed-node-5] 2026-03-30 00:25:36.577428 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.577438 | orchestrator | 2026-03-30 00:25:36.577449 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-30 00:25:36.577460 | orchestrator | Monday 30 March 2026 00:25:33 +0000 (0:00:00.514) 0:00:20.506 ********** 2026-03-30 00:25:36.577471 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:25:36.577489 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:25:36.577500 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:25:36.577511 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:25:36.577522 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:25:36.577532 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:25:36.577543 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:25:36.577559 | orchestrator | 2026-03-30 00:25:36.577578 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-30 00:25:36.577596 | orchestrator | Monday 30 March 2026 00:25:34 +0000 (0:00:00.200) 0:00:20.706 ********** 2026-03-30 00:25:36.577614 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.577634 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:36.577652 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:36.577670 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577681 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:25:36.577692 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.577703 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.577714 | orchestrator | 2026-03-30 00:25:36.577725 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-30 00:25:36.577736 | orchestrator | Monday 30 March 2026 00:25:35 +0000 (0:00:00.983) 0:00:21.690 ********** 2026-03-30 00:25:36.577747 | orchestrator | ok: [testbed-manager] 2026-03-30 
00:25:36.577757 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:25:36.577768 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:25:36.577779 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:25:36.577789 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:25:36.577800 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577811 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:25:36.577821 | orchestrator | 2026-03-30 00:25:36.577832 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-30 00:25:36.577843 | orchestrator | Monday 30 March 2026 00:25:35 +0000 (0:00:00.544) 0:00:22.235 ********** 2026-03-30 00:25:36.577854 | orchestrator | ok: [testbed-manager] 2026-03-30 00:25:36.577865 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:25:36.577876 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:25:36.577887 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:25:36.577906 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:26:17.093235 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:26:17.093347 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:26:17.093363 | orchestrator | 2026-03-30 00:26:17.093376 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-30 00:26:17.093388 | orchestrator | Monday 30 March 2026 00:25:36 +0000 (0:00:01.021) 0:00:23.256 ********** 2026-03-30 00:26:17.093400 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:26:17.093411 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:26:17.093422 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:26:17.093433 | orchestrator | changed: [testbed-manager] 2026-03-30 00:26:17.093444 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:26:17.093454 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:26:17.093465 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:26:17.093476 | orchestrator | 2026-03-30 00:26:17.093487 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-30 00:26:17.093498 | orchestrator | Monday 30 March 2026 00:25:53 +0000 (0:00:16.688) 0:00:39.944 ********** 2026-03-30 00:26:17.093510 | orchestrator | ok: [testbed-manager] 2026-03-30 00:26:17.093521 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:26:17.093532 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:26:17.093543 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:26:17.093553 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:26:17.093564 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:26:17.093575 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:26:17.093585 | orchestrator | 2026-03-30 00:26:17.093596 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-30 00:26:17.093607 | orchestrator | Monday 30 March 2026 00:25:53 +0000 (0:00:00.213) 0:00:40.158 ********** 2026-03-30 00:26:17.093618 | orchestrator | ok: [testbed-manager] 2026-03-30 00:26:17.093653 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:26:17.093664 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:26:17.093675 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:26:17.093686 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:26:17.093696 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:26:17.093707 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:26:17.093717 | orchestrator | 2026-03-30 00:26:17.093728 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-30 00:26:17.093739 | orchestrator | Monday 30 March 2026 00:25:53 +0000 (0:00:00.187) 0:00:40.346 ********** 2026-03-30 00:26:17.093750 | orchestrator | ok: [testbed-manager] 2026-03-30 00:26:17.093760 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:26:17.093771 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:26:17.093782 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:26:17.093792 | orchestrator | ok: 
[testbed-node-3] 2026-03-30 00:26:17.093802 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:26:17.093813 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:26:17.093824 | orchestrator | 2026-03-30 00:26:17.093834 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-30 00:26:17.093845 | orchestrator | Monday 30 March 2026 00:25:53 +0000 (0:00:00.202) 0:00:40.548 ********** 2026-03-30 00:26:17.093857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:26:17.093871 | orchestrator | 2026-03-30 00:26:17.093901 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-30 00:26:17.093912 | orchestrator | Monday 30 March 2026 00:25:54 +0000 (0:00:00.259) 0:00:40.808 ********** 2026-03-30 00:26:17.093923 | orchestrator | ok: [testbed-manager] 2026-03-30 00:26:17.093934 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:26:17.093944 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:26:17.093955 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:26:17.093965 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:26:17.094088 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:26:17.094113 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:26:17.094141 | orchestrator | 2026-03-30 00:26:17.094158 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-30 00:26:17.094175 | orchestrator | Monday 30 March 2026 00:25:56 +0000 (0:00:01.877) 0:00:42.685 ********** 2026-03-30 00:26:17.094192 | orchestrator | changed: [testbed-manager] 2026-03-30 00:26:17.094211 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:26:17.094228 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:26:17.094247 | orchestrator | 
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Monday 30 March 2026 00:25:57 +0000 (0:00:01.088) 0:00:43.774 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Monday 30 March 2026 00:25:58 +0000 (0:00:01.654) 0:00:45.428 **********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Monday 30 March 2026 00:25:59 +0000 (0:00:00.323) 0:00:45.752 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Monday 30 March 2026 00:26:00 +0000 (0:00:01.073) 0:00:46.826 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Monday 30 March 2026 00:26:00 +0000 (0:00:00.208) 0:00:47.034 **********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Monday 30 March 2026 00:26:00 +0000 (0:00:00.273) 0:00:47.308 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Monday 30 March 2026 00:26:02 +0000 (0:00:01.839) 0:00:49.148 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Monday 30 March 2026 00:26:03 +0000 (0:00:01.128) 0:00:50.277 **********
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Monday 30 March 2026 00:26:14 +0000 (0:00:11.064) 0:01:01.341 **********
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Monday 30 March 2026 00:26:15 +0000 (0:00:00.915) 0:01:02.257 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Monday 30 March 2026 00:26:16 +0000 (0:00:00.857) 0:01:03.114 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Monday 30 March 2026 00:26:16 +0000 (0:00:00.167) 0:01:03.282 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Monday 30 March 2026 00:26:16 +0000 (0:00:00.165) 0:01:03.448 **********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.packages : Install needrestart package] ********************
Monday 30 March 2026 00:26:17 +0000 (0:00:00.239) 0:01:03.687 **********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Monday 30 March 2026 00:26:19 +0000 (0:00:01.927) 0:01:05.615 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Monday 30 March 2026 00:26:19 +0000 (0:00:00.595) 0:01:06.211 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Update package cache] ***************************
Monday 30 March 2026 00:26:19 +0000 (0:00:00.194) 0:01:06.405 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.commons.packages : Download upgrade packages] **********************
Monday 30 March 2026 00:26:21 +0000 (0:00:01.253) 0:01:07.659 **********
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-4]

TASK [osism.commons.packages : Upgrade packages] *******************************
Monday 30 March 2026 00:26:22 +0000 (0:00:01.887) 0:01:09.546 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download required packages] *********************
Monday 30 March 2026 00:26:25 +0000 (0:00:02.844) 0:01:12.391 **********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Install required packages] **********************
Monday 30 March 2026 00:27:03 +0000 (0:00:37.698) 0:01:50.089 **********
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-0]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Monday 30 March 2026 00:28:23 +0000 (0:01:19.637) 0:03:09.726 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Monday 30 March 2026 00:28:25 +0000 (0:00:01.951) 0:03:11.677 **********
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Monday 30 March 2026 00:28:37 +0000 (0:00:12.404) 0:03:24.082 **********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Monday 30 March 2026 00:28:37 +0000 (0:00:00.359) 0:03:24.441 **********
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5]
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Monday 30 March 2026 00:28:38 +0000 (0:00:00.668) 0:03:25.110 **********
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5]
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Monday 30 March 2026 00:28:43 +0000 (0:00:04.668) 0:03:29.779 **********
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Monday 30 March 2026 00:28:45 +0000 (0:00:02.580) 0:03:32.359 **********
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Monday 30 March 2026 00:28:46 +0000 (0:00:00.671) 0:03:33.030 **********
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Monday 30 March 2026 00:28:46 +0000 (0:00:00.497) 0:03:33.528 **********
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Monday 30 March 2026 00:28:47 +0000 (0:00:00.307) 0:03:34.183 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.services : Populate service facts] *************************
Monday 30 March 2026 00:28:47 +0000 (0:00:00.307) 0:03:34.490 **********
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]

TASK [osism.commons.services : Check services] *********************************
Monday 30 March 2026 00:28:53 +0000 (0:00:05.764) 0:03:40.255 **********
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=nscd)
2026-03-30 00:28:59.124003
| orchestrator | skipping: [testbed-node-5] 2026-03-30 00:28:59.124019 | orchestrator | 2026-03-30 00:28:59.124033 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-30 00:28:59.124049 | orchestrator | Monday 30 March 2026 00:28:53 +0000 (0:00:00.285) 0:03:40.540 ********** 2026-03-30 00:28:59.124064 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-30 00:28:59.124077 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-30 00:28:59.124087 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-30 00:28:59.124113 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-30 00:28:59.124123 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-30 00:28:59.124132 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-30 00:28:59.124151 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-30 00:28:59.124159 | orchestrator | 2026-03-30 00:28:59.124168 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-30 00:28:59.124176 | orchestrator | Monday 30 March 2026 00:28:55 +0000 (0:00:01.135) 0:03:41.676 ********** 2026-03-30 00:28:59.124187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:28:59.124198 | orchestrator | 2026-03-30 00:28:59.124207 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-30 00:28:59.124216 | orchestrator | Monday 30 March 2026 00:28:55 +0000 (0:00:00.375) 0:03:42.052 ********** 2026-03-30 00:28:59.124224 | orchestrator | ok: [testbed-manager] 2026-03-30 00:28:59.124233 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:28:59.124241 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:28:59.124250 | orchestrator | ok: 
[testbed-node-3] 2026-03-30 00:28:59.124258 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:28:59.124267 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:28:59.124275 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:28:59.124283 | orchestrator | 2026-03-30 00:28:59.124292 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-30 00:28:59.124300 | orchestrator | Monday 30 March 2026 00:28:56 +0000 (0:00:01.273) 0:03:43.325 ********** 2026-03-30 00:28:59.124309 | orchestrator | ok: [testbed-manager] 2026-03-30 00:28:59.124317 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:28:59.124326 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:28:59.124334 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:28:59.124342 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:28:59.124350 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:28:59.124376 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:28:59.124385 | orchestrator | 2026-03-30 00:28:59.124394 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-30 00:28:59.124402 | orchestrator | Monday 30 March 2026 00:28:57 +0000 (0:00:00.625) 0:03:43.951 ********** 2026-03-30 00:28:59.124411 | orchestrator | changed: [testbed-manager] 2026-03-30 00:28:59.124419 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:28:59.124428 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:28:59.124436 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:28:59.124445 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:28:59.124453 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:28:59.124461 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:28:59.124470 | orchestrator | 2026-03-30 00:28:59.124478 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-30 00:28:59.124487 | orchestrator | Monday 30 March 2026 00:28:57 +0000 (0:00:00.641) 
0:03:44.593 ********** 2026-03-30 00:28:59.124495 | orchestrator | ok: [testbed-manager] 2026-03-30 00:28:59.124504 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:28:59.124512 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:28:59.124521 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:28:59.124529 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:28:59.124538 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:28:59.124546 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:28:59.124554 | orchestrator | 2026-03-30 00:28:59.124563 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-30 00:28:59.124571 | orchestrator | Monday 30 March 2026 00:28:58 +0000 (0:00:00.606) 0:03:45.199 ********** 2026-03-30 00:28:59.124588 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829131.982181, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:28:59.124605 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829137.523919, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:28:59.124615 | orchestrator | 
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829146.5302029, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:28:59.124643 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829138.4045804, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284581 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829153.9883153, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284712 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829151.738613, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284730 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774829128.051461, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284758 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284803 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284815 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284825 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284864 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284875 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284884 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 00:29:04.284893 | orchestrator | 2026-03-30 00:29:04.284903 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-30 00:29:04.284915 | orchestrator | Monday 30 March 2026 00:28:59 +0000 (0:00:00.943) 0:03:46.143 ********** 2026-03-30 00:29:04.284979 | orchestrator | changed: [testbed-manager] 2026-03-30 00:29:04.284990 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:29:04.284999 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:29:04.285019 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:29:04.285028 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:29:04.285037 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:29:04.285047 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:29:04.285056 | orchestrator | 2026-03-30 00:29:04.285066 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-30 00:29:04.285076 | orchestrator | Monday 30 March 2026 00:29:00 +0000 (0:00:01.066) 0:03:47.210 ********** 2026-03-30 00:29:04.285085 | orchestrator | changed: [testbed-manager] 2026-03-30 00:29:04.285094 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:29:04.285103 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:29:04.285118 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:29:04.285126 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:29:04.285135 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:29:04.285144 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:29:04.285153 | orchestrator | 2026-03-30 00:29:04.285162 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-30 00:29:04.285171 | orchestrator | Monday 30 March 2026 00:29:01 +0000 (0:00:01.093) 0:03:48.303 ********** 2026-03-30 00:29:04.285181 | orchestrator | changed: [testbed-manager] 2026-03-30 00:29:04.285190 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:29:04.285200 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:29:04.285208 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:29:04.285217 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:29:04.285226 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:29:04.285235 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:29:04.285244 | orchestrator | 2026-03-30 00:29:04.285254 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-30 00:29:04.285264 | orchestrator | Monday 30 March 2026 00:29:02 +0000 (0:00:01.252) 0:03:49.556 ********** 2026-03-30 00:29:04.285273 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:29:04.285283 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:29:04.285292 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:29:04.285302 | orchestrator | skipping: [testbed-node-2] 
2026-03-30 00:29:04.285311 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:29:04.285320 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:29:04.285328 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:29:04.285337 | orchestrator | 2026-03-30 00:29:04.285347 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-30 00:29:04.285356 | orchestrator | Monday 30 March 2026 00:29:03 +0000 (0:00:00.208) 0:03:49.765 ********** 2026-03-30 00:29:04.285365 | orchestrator | ok: [testbed-manager] 2026-03-30 00:29:04.285375 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:29:04.285384 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:29:04.285393 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:29:04.285403 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:29:04.285412 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:29:04.285421 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:29:04.285430 | orchestrator | 2026-03-30 00:29:04.285440 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-30 00:29:04.285449 | orchestrator | Monday 30 March 2026 00:29:03 +0000 (0:00:00.781) 0:03:50.547 ********** 2026-03-30 00:29:04.285461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:29:04.285472 | orchestrator | 2026-03-30 00:29:04.285481 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-30 00:29:04.285505 | orchestrator | Monday 30 March 2026 00:29:04 +0000 (0:00:00.332) 0:03:50.879 ********** 2026-03-30 00:30:23.926299 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.926406 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:23.926423 | orchestrator | changed: 
[testbed-node-3] 2026-03-30 00:30:23.926434 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:23.926472 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:23.926483 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:23.926494 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:30:23.926505 | orchestrator | 2026-03-30 00:30:23.926518 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-30 00:30:23.926530 | orchestrator | Monday 30 March 2026 00:29:13 +0000 (0:00:09.292) 0:04:00.172 ********** 2026-03-30 00:30:23.926541 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.926552 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.926563 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.926573 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.926584 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.926594 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:23.926605 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.926616 | orchestrator | 2026-03-30 00:30:23.926627 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-30 00:30:23.926646 | orchestrator | Monday 30 March 2026 00:29:14 +0000 (0:00:01.342) 0:04:01.515 ********** 2026-03-30 00:30:23.926663 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.926680 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.926696 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.926713 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.926731 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.926747 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:23.926765 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.926782 | orchestrator | 2026-03-30 00:30:23.926801 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-30 00:30:23.926819 | orchestrator | 
Monday 30 March 2026 00:29:15 +0000 (0:00:00.981) 0:04:02.496 ********** 2026-03-30 00:30:23.926838 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.926857 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.926869 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.926881 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.926931 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.926944 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:23.926956 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.926968 | orchestrator | 2026-03-30 00:30:23.926980 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-30 00:30:23.926994 | orchestrator | Monday 30 March 2026 00:29:16 +0000 (0:00:00.297) 0:04:02.793 ********** 2026-03-30 00:30:23.927006 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.927019 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.927031 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.927043 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.927055 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.927067 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:23.927079 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.927091 | orchestrator | 2026-03-30 00:30:23.927104 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-30 00:30:23.927116 | orchestrator | Monday 30 March 2026 00:29:16 +0000 (0:00:00.294) 0:04:03.088 ********** 2026-03-30 00:30:23.927128 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.927140 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.927152 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.927162 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.927173 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.927184 | orchestrator | ok: [testbed-node-4] 2026-03-30 
00:30:23.927194 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.927205 | orchestrator | 2026-03-30 00:30:23.927216 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-30 00:30:23.927227 | orchestrator | Monday 30 March 2026 00:29:16 +0000 (0:00:00.274) 0:04:03.362 ********** 2026-03-30 00:30:23.927238 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.927248 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.927259 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.927301 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:23.927312 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.927323 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.927333 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.927344 | orchestrator | 2026-03-30 00:30:23.927355 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-30 00:30:23.927366 | orchestrator | Monday 30 March 2026 00:29:21 +0000 (0:00:04.782) 0:04:08.145 ********** 2026-03-30 00:30:23.927379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:30:23.927393 | orchestrator | 2026-03-30 00:30:23.927404 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-30 00:30:23.927415 | orchestrator | Monday 30 March 2026 00:29:21 +0000 (0:00:00.361) 0:04:08.507 ********** 2026-03-30 00:30:23.927426 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927436 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-30 00:30:23.927447 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927458 | orchestrator | skipping: 
[testbed-manager] 2026-03-30 00:30:23.927469 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-30 00:30:23.927480 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927490 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-30 00:30:23.927501 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:23.927512 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927523 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-30 00:30:23.927533 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:23.927544 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927555 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-30 00:30:23.927565 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:23.927576 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927587 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-30 00:30:23.927617 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:30:23.927629 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:30:23.927639 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-30 00:30:23.927650 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-30 00:30:23.927661 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:23.927672 | orchestrator | 2026-03-30 00:30:23.927683 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-30 00:30:23.927694 | orchestrator | Monday 30 March 2026 00:29:22 +0000 (0:00:00.335) 0:04:08.843 ********** 2026-03-30 00:30:23.927706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:30:23.927717 | orchestrator | 2026-03-30 00:30:23.927740 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-30 00:30:23.927752 | orchestrator | Monday 30 March 2026 00:29:22 +0000 (0:00:00.510) 0:04:09.353 ********** 2026-03-30 00:30:23.927763 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-30 00:30:23.927774 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-30 00:30:23.927785 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:23.927796 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-30 00:30:23.927824 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:23.927836 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:23.927847 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-30 00:30:23.927865 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-30 00:30:23.927876 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:23.927918 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:30:23.927930 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-30 00:30:23.927951 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:30:23.927963 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-30 00:30:23.927982 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:23.928005 | orchestrator | 2026-03-30 00:30:23.928034 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-30 00:30:23.928051 | orchestrator | Monday 30 March 2026 00:29:23 +0000 (0:00:00.311) 0:04:09.664 ********** 2026-03-30 00:30:23.928068 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:30:23.928088 | orchestrator | 2026-03-30 00:30:23.928108 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-30 00:30:23.928127 | orchestrator | Monday 30 March 2026 00:29:23 +0000 (0:00:00.376) 0:04:10.041 ********** 2026-03-30 00:30:23.928155 | orchestrator | changed: [testbed-manager] 2026-03-30 00:30:23.928167 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:23.928177 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:23.928188 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:30:23.928198 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:23.928209 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:23.928220 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:30:23.928230 | orchestrator | 2026-03-30 00:30:23.928241 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-30 00:30:23.928252 | orchestrator | Monday 30 March 2026 00:29:58 +0000 (0:00:34.768) 0:04:44.809 ********** 2026-03-30 00:30:23.928263 | orchestrator | changed: [testbed-manager] 2026-03-30 00:30:23.928273 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:23.928284 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:23.928294 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:30:23.928305 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:23.928315 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:23.928326 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:30:23.928336 | orchestrator | 2026-03-30 00:30:23.928347 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-30 00:30:23.928357 | orchestrator | 
Monday 30 March 2026 00:30:07 +0000 (0:00:08.837) 0:04:53.647 ********** 2026-03-30 00:30:23.928368 | orchestrator | changed: [testbed-manager] 2026-03-30 00:30:23.928379 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:30:23.928389 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:23.928400 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:23.928410 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:23.928421 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:23.928431 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:30:23.928442 | orchestrator | 2026-03-30 00:30:23.928453 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-30 00:30:23.928463 | orchestrator | Monday 30 March 2026 00:30:15 +0000 (0:00:08.536) 0:05:02.184 ********** 2026-03-30 00:30:23.928474 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:23.928484 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:23.928495 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:23.928506 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:23.928516 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:23.928527 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:23.928537 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:23.928548 | orchestrator | 2026-03-30 00:30:23.928558 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-30 00:30:23.928579 | orchestrator | Monday 30 March 2026 00:30:17 +0000 (0:00:01.909) 0:05:04.094 ********** 2026-03-30 00:30:23.928589 | orchestrator | changed: [testbed-manager] 2026-03-30 00:30:23.928600 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:23.928611 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:23.928621 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:23.928632 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:23.928643 | orchestrator | changed: 
[testbed-node-4] 2026-03-30 00:30:23.928654 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:30:23.928669 | orchestrator | 2026-03-30 00:30:23.928706 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-30 00:30:34.917601 | orchestrator | Monday 30 March 2026 00:30:23 +0000 (0:00:06.425) 0:05:10.519 ********** 2026-03-30 00:30:34.917705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:30:34.917722 | orchestrator | 2026-03-30 00:30:34.917734 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-30 00:30:34.917745 | orchestrator | Monday 30 March 2026 00:30:24 +0000 (0:00:00.398) 0:05:10.917 ********** 2026-03-30 00:30:34.917756 | orchestrator | changed: [testbed-manager] 2026-03-30 00:30:34.917767 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:34.917777 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:34.917786 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:34.917796 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:30:34.917806 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:30:34.917816 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:34.917833 | orchestrator | 2026-03-30 00:30:34.917849 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-30 00:30:34.917865 | orchestrator | Monday 30 March 2026 00:30:25 +0000 (0:00:00.707) 0:05:11.624 ********** 2026-03-30 00:30:34.917962 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:34.917985 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:34.918003 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:34.918078 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:34.918091 | 
orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:34.918100 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:34.918110 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:34.918120 | orchestrator | 2026-03-30 00:30:34.918130 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-30 00:30:34.918139 | orchestrator | Monday 30 March 2026 00:30:26 +0000 (0:00:01.760) 0:05:13.385 ********** 2026-03-30 00:30:34.918149 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:30:34.918159 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:30:34.918169 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:30:34.918179 | orchestrator | changed: [testbed-manager] 2026-03-30 00:30:34.918189 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:30:34.918198 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:30:34.918208 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:30:34.918217 | orchestrator | 2026-03-30 00:30:34.918227 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-30 00:30:34.918237 | orchestrator | Monday 30 March 2026 00:30:27 +0000 (0:00:00.781) 0:05:14.167 ********** 2026-03-30 00:30:34.918247 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:34.918256 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:34.918266 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:34.918282 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:34.918303 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:30:34.918327 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:30:34.918341 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:34.918356 | orchestrator | 2026-03-30 00:30:34.918370 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-30 00:30:34.918406 | orchestrator | Monday 30 March 2026 00:30:27 +0000 (0:00:00.260) 
0:05:14.428 ********** 2026-03-30 00:30:34.918452 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:34.918469 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:34.918484 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:34.918501 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:34.918517 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:30:34.918534 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:30:34.918550 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:34.918566 | orchestrator | 2026-03-30 00:30:34.918583 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-30 00:30:34.918599 | orchestrator | Monday 30 March 2026 00:30:28 +0000 (0:00:00.429) 0:05:14.858 ********** 2026-03-30 00:30:34.918617 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:34.918630 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:34.918639 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:34.918649 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:34.918658 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:34.918668 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:34.918677 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:34.918686 | orchestrator | 2026-03-30 00:30:34.918696 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-30 00:30:34.918706 | orchestrator | Monday 30 March 2026 00:30:28 +0000 (0:00:00.425) 0:05:15.283 ********** 2026-03-30 00:30:34.918715 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:34.918725 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:34.918735 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:34.918744 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:34.918754 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:30:34.918763 | orchestrator | skipping: [testbed-node-4] 2026-03-30 
00:30:34.918772 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:34.918782 | orchestrator | 2026-03-30 00:30:34.918791 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-30 00:30:34.918802 | orchestrator | Monday 30 March 2026 00:30:28 +0000 (0:00:00.269) 0:05:15.553 ********** 2026-03-30 00:30:34.918811 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:34.918821 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:34.918830 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:34.918840 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:34.918849 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:34.918859 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:34.918868 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:34.918904 | orchestrator | 2026-03-30 00:30:34.918914 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-30 00:30:34.918924 | orchestrator | Monday 30 March 2026 00:30:29 +0000 (0:00:00.330) 0:05:15.884 ********** 2026-03-30 00:30:34.918934 | orchestrator | ok: [testbed-manager] =>  2026-03-30 00:30:34.918943 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.918953 | orchestrator | ok: [testbed-node-0] =>  2026-03-30 00:30:34.918963 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.918972 | orchestrator | ok: [testbed-node-1] =>  2026-03-30 00:30:34.918982 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.918991 | orchestrator | ok: [testbed-node-2] =>  2026-03-30 00:30:34.919001 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.919031 | orchestrator | ok: [testbed-node-3] =>  2026-03-30 00:30:34.919041 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.919051 | orchestrator | ok: [testbed-node-4] =>  2026-03-30 00:30:34.919060 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.919070 | orchestrator | ok: [testbed-node-5] =>  
2026-03-30 00:30:34.919079 | orchestrator |  docker_version: 5:27.5.1 2026-03-30 00:30:34.919089 | orchestrator | 2026-03-30 00:30:34.919098 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-30 00:30:34.919108 | orchestrator | Monday 30 March 2026 00:30:29 +0000 (0:00:00.273) 0:05:16.158 ********** 2026-03-30 00:30:34.919117 | orchestrator | ok: [testbed-manager] =>  2026-03-30 00:30:34.919136 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919145 | orchestrator | ok: [testbed-node-0] =>  2026-03-30 00:30:34.919155 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919164 | orchestrator | ok: [testbed-node-1] =>  2026-03-30 00:30:34.919173 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919183 | orchestrator | ok: [testbed-node-2] =>  2026-03-30 00:30:34.919192 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919202 | orchestrator | ok: [testbed-node-3] =>  2026-03-30 00:30:34.919211 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919220 | orchestrator | ok: [testbed-node-4] =>  2026-03-30 00:30:34.919230 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919239 | orchestrator | ok: [testbed-node-5] =>  2026-03-30 00:30:34.919249 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-30 00:30:34.919258 | orchestrator | 2026-03-30 00:30:34.919268 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-30 00:30:34.919277 | orchestrator | Monday 30 March 2026 00:30:29 +0000 (0:00:00.311) 0:05:16.470 ********** 2026-03-30 00:30:34.919287 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:34.919296 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:34.919305 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:34.919315 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:34.919324 | orchestrator | skipping: [testbed-node-3] 
2026-03-30 00:30:34.919334 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:30:34.919343 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:34.919352 | orchestrator | 2026-03-30 00:30:34.919362 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-30 00:30:34.919371 | orchestrator | Monday 30 March 2026 00:30:30 +0000 (0:00:00.269) 0:05:16.739 ********** 2026-03-30 00:30:34.919381 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:34.919390 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:34.919400 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:34.919409 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:30:34.919418 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:30:34.919428 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:30:34.919437 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:30:34.919446 | orchestrator | 2026-03-30 00:30:34.919456 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-30 00:30:34.919466 | orchestrator | Monday 30 March 2026 00:30:30 +0000 (0:00:00.252) 0:05:16.991 ********** 2026-03-30 00:30:34.919484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:30:34.919497 | orchestrator | 2026-03-30 00:30:34.919507 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-30 00:30:34.919516 | orchestrator | Monday 30 March 2026 00:30:30 +0000 (0:00:00.375) 0:05:17.367 ********** 2026-03-30 00:30:34.919526 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:34.919536 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:34.919545 | orchestrator | ok: [testbed-node-4] 2026-03-30 
00:30:34.919555 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:34.919564 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:34.919573 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:34.919583 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:34.919592 | orchestrator | 2026-03-30 00:30:34.919602 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-30 00:30:34.919611 | orchestrator | Monday 30 March 2026 00:30:31 +0000 (0:00:00.828) 0:05:18.195 ********** 2026-03-30 00:30:34.919621 | orchestrator | ok: [testbed-manager] 2026-03-30 00:30:34.919630 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:30:34.919639 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:30:34.919649 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:30:34.919658 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:30:34.919675 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:30:34.919684 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:30:34.919694 | orchestrator | 2026-03-30 00:30:34.919703 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-30 00:30:34.919714 | orchestrator | Monday 30 March 2026 00:30:34 +0000 (0:00:03.019) 0:05:21.214 ********** 2026-03-30 00:30:34.919723 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-30 00:30:34.919733 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-30 00:30:34.919743 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-30 00:30:34.919752 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-30 00:30:34.919762 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-30 00:30:34.919772 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:30:34.919781 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-30 00:30:34.919790 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-30 00:30:34.919800 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-30 00:30:34.919810 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-30 00:30:34.919819 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:30:34.919828 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-30 00:30:34.919838 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-30 00:30:34.919847 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-30 00:30:34.919857 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:30:34.919867 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-30 00:30:34.919900 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:37.492709 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-30 00:31:37.492809 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-30 00:31:37.492885 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-30 00:31:37.492895 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-30 00:31:37.492902 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-30 00:31:37.492911 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:37.492924 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:37.492934 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-30 00:31:37.492950 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-30 00:31:37.492962 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-30 00:31:37.492972 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:37.492983 | orchestrator | 2026-03-30 00:31:37.492994 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-30 00:31:37.493007 | orchestrator | Monday 30 
March 2026 00:30:35 +0000 (0:00:00.468) 0:05:21.682 ********** 2026-03-30 00:31:37.493019 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493030 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493039 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493046 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493052 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493058 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493064 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493071 | orchestrator | 2026-03-30 00:31:37.493077 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-30 00:31:37.493084 | orchestrator | Monday 30 March 2026 00:30:41 +0000 (0:00:06.868) 0:05:28.551 ********** 2026-03-30 00:31:37.493090 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493096 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493103 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493109 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493115 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493121 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493150 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493156 | orchestrator | 2026-03-30 00:31:37.493163 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-30 00:31:37.493169 | orchestrator | Monday 30 March 2026 00:30:43 +0000 (0:00:01.123) 0:05:29.674 ********** 2026-03-30 00:31:37.493175 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493181 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493187 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493194 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493200 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493206 | orchestrator | 
changed: [testbed-node-2] 2026-03-30 00:31:37.493212 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493218 | orchestrator | 2026-03-30 00:31:37.493224 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-30 00:31:37.493231 | orchestrator | Monday 30 March 2026 00:30:51 +0000 (0:00:08.384) 0:05:38.059 ********** 2026-03-30 00:31:37.493237 | orchestrator | changed: [testbed-manager] 2026-03-30 00:31:37.493243 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493263 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493270 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493277 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493284 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493291 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493298 | orchestrator | 2026-03-30 00:31:37.493305 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-30 00:31:37.493312 | orchestrator | Monday 30 March 2026 00:30:54 +0000 (0:00:03.396) 0:05:41.455 ********** 2026-03-30 00:31:37.493319 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493327 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493334 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493341 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493347 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493355 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493362 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493369 | orchestrator | 2026-03-30 00:31:37.493376 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-30 00:31:37.493383 | orchestrator | Monday 30 March 2026 00:30:56 +0000 (0:00:01.269) 0:05:42.725 ********** 2026-03-30 00:31:37.493390 | orchestrator | ok: [testbed-manager] 
2026-03-30 00:31:37.493397 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493404 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493411 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493418 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493425 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493432 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493439 | orchestrator | 2026-03-30 00:31:37.493446 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-30 00:31:37.493453 | orchestrator | Monday 30 March 2026 00:30:57 +0000 (0:00:01.312) 0:05:44.037 ********** 2026-03-30 00:31:37.493460 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:37.493467 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:37.493475 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:37.493482 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:37.493489 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:37.493497 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:37.493504 | orchestrator | changed: [testbed-manager] 2026-03-30 00:31:37.493511 | orchestrator | 2026-03-30 00:31:37.493517 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-30 00:31:37.493525 | orchestrator | Monday 30 March 2026 00:30:58 +0000 (0:00:00.595) 0:05:44.632 ********** 2026-03-30 00:31:37.493531 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493539 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493546 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493558 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493565 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493572 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493579 | orchestrator | changed: [testbed-node-4] 2026-03-30 
00:31:37.493587 | orchestrator | 2026-03-30 00:31:37.493594 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-30 00:31:37.493617 | orchestrator | Monday 30 March 2026 00:31:08 +0000 (0:00:10.684) 0:05:55.316 ********** 2026-03-30 00:31:37.493624 | orchestrator | changed: [testbed-manager] 2026-03-30 00:31:37.493631 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493639 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493646 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493653 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493664 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493678 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493690 | orchestrator | 2026-03-30 00:31:37.493699 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-30 00:31:37.493709 | orchestrator | Monday 30 March 2026 00:31:09 +0000 (0:00:01.091) 0:05:56.408 ********** 2026-03-30 00:31:37.493718 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493728 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493737 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493746 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493755 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493765 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493776 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493787 | orchestrator | 2026-03-30 00:31:37.493797 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-30 00:31:37.493807 | orchestrator | Monday 30 March 2026 00:31:19 +0000 (0:00:09.429) 0:06:05.838 ********** 2026-03-30 00:31:37.493842 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.493849 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.493855 | 
orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.493861 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.493868 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.493874 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.493880 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.493886 | orchestrator | 2026-03-30 00:31:37.493892 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-30 00:31:37.493899 | orchestrator | Monday 30 March 2026 00:31:30 +0000 (0:00:11.381) 0:06:17.220 ********** 2026-03-30 00:31:37.493909 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-30 00:31:37.493925 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-30 00:31:37.493936 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-30 00:31:37.493946 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-30 00:31:37.493956 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-30 00:31:37.493967 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-30 00:31:37.493978 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-30 00:31:37.493988 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-30 00:31:37.493998 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-30 00:31:37.494009 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-30 00:31:37.494062 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-30 00:31:37.494069 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-30 00:31:37.494075 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-30 00:31:37.494081 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-30 00:31:37.494088 | orchestrator | 2026-03-30 00:31:37.494094 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-30 00:31:37.494100 | orchestrator | Monday 30 March 2026 00:31:31 +0000 (0:00:01.279) 0:06:18.500 ********** 2026-03-30 00:31:37.494118 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:31:37.494124 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:37.494131 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:37.494137 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:37.494143 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:37.494149 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:37.494155 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:37.494162 | orchestrator | 2026-03-30 00:31:37.494168 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-30 00:31:37.494174 | orchestrator | Monday 30 March 2026 00:31:32 +0000 (0:00:00.690) 0:06:19.190 ********** 2026-03-30 00:31:37.494180 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:37.494187 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:31:37.494193 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:31:37.494199 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:31:37.494205 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:31:37.494211 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:31:37.494217 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:31:37.494223 | orchestrator | 2026-03-30 00:31:37.494230 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-30 00:31:37.494238 | orchestrator | Monday 30 March 2026 00:31:36 +0000 (0:00:04.159) 0:06:23.349 ********** 2026-03-30 00:31:37.494244 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:31:37.494250 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:37.494256 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:37.494262 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 00:31:37.494269 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:37.494275 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:37.494281 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:37.494287 | orchestrator | 2026-03-30 00:31:37.494326 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-30 00:31:37.494334 | orchestrator | Monday 30 March 2026 00:31:37 +0000 (0:00:00.473) 0:06:23.823 ********** 2026-03-30 00:31:37.494340 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-30 00:31:37.494346 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-30 00:31:37.494353 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-30 00:31:37.494359 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-30 00:31:37.494365 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:31:37.494371 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-30 00:31:37.494377 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-30 00:31:37.494384 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:37.494398 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-30 00:31:56.158366 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-30 00:31:56.158520 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:56.158542 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-30 00:31:56.158554 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-30 00:31:56.158566 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:56.158577 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-30 00:31:56.158588 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  
2026-03-30 00:31:56.158599 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:56.158610 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:56.158622 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-30 00:31:56.158632 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-30 00:31:56.158643 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:56.158654 | orchestrator | 2026-03-30 00:31:56.158667 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-30 00:31:56.158702 | orchestrator | Monday 30 March 2026 00:31:37 +0000 (0:00:00.532) 0:06:24.355 ********** 2026-03-30 00:31:56.158713 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:31:56.158724 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:56.158734 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:56.158745 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:56.158755 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:56.158766 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:56.158840 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:56.158859 | orchestrator | 2026-03-30 00:31:56.158887 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-30 00:31:56.158912 | orchestrator | Monday 30 March 2026 00:31:38 +0000 (0:00:00.490) 0:06:24.846 ********** 2026-03-30 00:31:56.158930 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:31:56.158947 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:56.158965 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:56.158983 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:56.159001 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:56.159019 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:56.159038 | orchestrator | skipping: 
[testbed-node-5] 2026-03-30 00:31:56.159056 | orchestrator | 2026-03-30 00:31:56.159076 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-30 00:31:56.159094 | orchestrator | Monday 30 March 2026 00:31:38 +0000 (0:00:00.675) 0:06:25.521 ********** 2026-03-30 00:31:56.159111 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:31:56.159129 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:31:56.159145 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:31:56.159165 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:31:56.159184 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:31:56.159204 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:31:56.159224 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:31:56.159242 | orchestrator | 2026-03-30 00:31:56.159261 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-30 00:31:56.159302 | orchestrator | Monday 30 March 2026 00:31:39 +0000 (0:00:00.497) 0:06:26.019 ********** 2026-03-30 00:31:56.159323 | orchestrator | ok: [testbed-manager] 2026-03-30 00:31:56.159342 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:31:56.159358 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:31:56.159374 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:31:56.159391 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:31:56.159410 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:31:56.159429 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:31:56.159448 | orchestrator | 2026-03-30 00:31:56.159467 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-30 00:31:56.159485 | orchestrator | Monday 30 March 2026 00:31:41 +0000 (0:00:01.925) 0:06:27.945 ********** 2026-03-30 00:31:56.159501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:31:56.159515 | orchestrator |
2026-03-30 00:31:56.159526 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-30 00:31:56.159536 | orchestrator | Monday 30 March 2026 00:31:42 +0000 (0:00:00.850) 0:06:28.796 **********
2026-03-30 00:31:56.159547 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.159558 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:31:56.159571 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:31:56.159590 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:31:56.159607 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:31:56.159625 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:31:56.159644 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:31:56.159662 | orchestrator |
2026-03-30 00:31:56.159681 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-30 00:31:56.159716 | orchestrator | Monday 30 March 2026 00:31:43 +0000 (0:00:01.050) 0:06:29.847 **********
2026-03-30 00:31:56.159736 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.159753 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:31:56.159771 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:31:56.159824 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:31:56.159842 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:31:56.159860 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:31:56.159876 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:31:56.159893 | orchestrator |
2026-03-30 00:31:56.159911 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-30 00:31:56.159930 | orchestrator | Monday 30 March 2026 00:31:44 +0000 (0:00:00.858) 0:06:30.706 **********
2026-03-30 00:31:56.159948 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.159967 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:31:56.159985 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:31:56.160004 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:31:56.160022 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:31:56.160036 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:31:56.160047 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:31:56.160058 | orchestrator |
2026-03-30 00:31:56.160069 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-30 00:31:56.160103 | orchestrator | Monday 30 March 2026 00:31:45 +0000 (0:00:01.336) 0:06:32.042 **********
2026-03-30 00:31:56.160115 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:31:56.160125 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:31:56.160136 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:31:56.160147 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:31:56.160157 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:31:56.160168 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:31:56.160178 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:31:56.160189 | orchestrator |
2026-03-30 00:31:56.160200 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-30 00:31:56.160211 | orchestrator | Monday 30 March 2026 00:31:46 +0000 (0:00:01.384) 0:06:33.427 **********
2026-03-30 00:31:56.160222 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.160232 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:31:56.160243 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:31:56.160253 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:31:56.160264 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:31:56.160274 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:31:56.160285 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:31:56.160296 | orchestrator |
2026-03-30 00:31:56.160306 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-30 00:31:56.160317 | orchestrator | Monday 30 March 2026 00:31:48 +0000 (0:00:01.436) 0:06:34.864 **********
2026-03-30 00:31:56.160328 | orchestrator | changed: [testbed-manager]
2026-03-30 00:31:56.160338 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:31:56.160349 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:31:56.160359 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:31:56.160370 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:31:56.160380 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:31:56.160391 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:31:56.160401 | orchestrator |
2026-03-30 00:31:56.160412 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-30 00:31:56.160423 | orchestrator | Monday 30 March 2026 00:31:49 +0000 (0:00:01.395) 0:06:36.259 **********
2026-03-30 00:31:56.160434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:31:56.160445 | orchestrator |
2026-03-30 00:31:56.160456 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-30 00:31:56.160467 | orchestrator | Monday 30 March 2026 00:31:50 +0000 (0:00:00.839) 0:06:37.098 **********
2026-03-30 00:31:56.160493 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.160504 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:31:56.160515 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:31:56.160526 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:31:56.160536 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:31:56.160547 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:31:56.160557 |
orchestrator | ok: [testbed-node-5]
2026-03-30 00:31:56.160568 | orchestrator |
2026-03-30 00:31:56.160579 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-30 00:31:56.160590 | orchestrator | Monday 30 March 2026 00:31:51 +0000 (0:00:01.340) 0:06:38.439 **********
2026-03-30 00:31:56.160601 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.160611 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:31:56.160622 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:31:56.160632 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:31:56.160643 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:31:56.160654 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:31:56.160664 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:31:56.160680 | orchestrator |
2026-03-30 00:31:56.160699 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-30 00:31:56.160717 | orchestrator | Monday 30 March 2026 00:31:53 +0000 (0:00:01.183) 0:06:39.623 **********
2026-03-30 00:31:56.160735 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.160751 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:31:56.160766 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:31:56.160845 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:31:56.160866 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:31:56.160884 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:31:56.160900 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:31:56.160916 | orchestrator |
2026-03-30 00:31:56.160928 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-30 00:31:56.160939 | orchestrator | Monday 30 March 2026 00:31:54 +0000 (0:00:01.053) 0:06:40.676 **********
2026-03-30 00:31:56.160949 | orchestrator | ok: [testbed-manager]
2026-03-30 00:31:56.160960 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:31:56.160971 | orchestrator | ok:
[testbed-node-1]
2026-03-30 00:31:56.160981 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:31:56.160991 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:31:56.161002 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:31:56.161012 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:31:56.161023 | orchestrator |
2026-03-30 00:31:56.161033 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-30 00:31:56.161044 | orchestrator | Monday 30 March 2026 00:31:55 +0000 (0:00:01.097) 0:06:41.774 **********
2026-03-30 00:31:56.161055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:31:56.161066 | orchestrator |
2026-03-30 00:31:56.161077 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:31:56.161088 | orchestrator | Monday 30 March 2026 00:31:55 +0000 (0:00:00.756) 0:06:42.531 **********
2026-03-30 00:31:56.161099 | orchestrator |
2026-03-30 00:31:56.161109 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:31:56.161120 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.139) 0:06:42.670 **********
2026-03-30 00:31:56.161131 | orchestrator |
2026-03-30 00:31:56.161141 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:31:56.161152 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.038) 0:06:42.708 **********
2026-03-30 00:31:56.161163 | orchestrator |
2026-03-30 00:31:56.161173 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:31:56.161194 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.038) 0:06:42.747 **********
2026-03-30
00:32:21.530707 | orchestrator |
2026-03-30 00:32:21.530833 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:32:21.530868 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.045) 0:06:42.793 **********
2026-03-30 00:32:21.530877 | orchestrator |
2026-03-30 00:32:21.530885 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:32:21.530893 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.039) 0:06:42.833 **********
2026-03-30 00:32:21.530901 | orchestrator |
2026-03-30 00:32:21.530909 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-30 00:32:21.530917 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.038) 0:06:42.871 **********
2026-03-30 00:32:21.530925 | orchestrator |
2026-03-30 00:32:21.530932 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-30 00:32:21.530940 | orchestrator | Monday 30 March 2026 00:31:56 +0000 (0:00:00.046) 0:06:42.918 **********
2026-03-30 00:32:21.530948 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:21.530957 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:21.530964 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:21.530972 | orchestrator |
2026-03-30 00:32:21.530980 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-30 00:32:21.530988 | orchestrator | Monday 30 March 2026 00:31:57 +0000 (0:00:01.271) 0:06:44.189 **********
2026-03-30 00:32:21.530996 | orchestrator | changed: [testbed-manager]
2026-03-30 00:32:21.531005 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:21.531013 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:21.531020 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:21.531028 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:21.531045 |
orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:21.531053 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:21.531061 | orchestrator |
2026-03-30 00:32:21.531069 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-30 00:32:21.531077 | orchestrator | Monday 30 March 2026 00:31:59 +0000 (0:00:01.463) 0:06:45.653 **********
2026-03-30 00:32:21.531085 | orchestrator | changed: [testbed-manager]
2026-03-30 00:32:21.531093 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:21.531100 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:21.531108 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:21.531116 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:21.531123 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:21.531131 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:21.531139 | orchestrator |
2026-03-30 00:32:21.531146 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-30 00:32:21.531154 | orchestrator | Monday 30 March 2026 00:32:00 +0000 (0:00:01.351) 0:06:47.005 **********
2026-03-30 00:32:21.531162 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:21.531170 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:21.531177 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:21.531185 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:21.531193 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:21.531201 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:21.531208 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:21.531216 | orchestrator |
2026-03-30 00:32:21.531237 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-30 00:32:21.531246 | orchestrator | Monday 30 March 2026 00:32:02 +0000 (0:00:02.229) 0:06:49.235 **********
2026-03-30 00:32:21.531255 | orchestrator |
skipping: [testbed-node-0]
2026-03-30 00:32:21.531263 | orchestrator |
2026-03-30 00:32:21.531272 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-30 00:32:21.531281 | orchestrator | Monday 30 March 2026 00:32:02 +0000 (0:00:00.088) 0:06:49.323 **********
2026-03-30 00:32:21.531290 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:21.531299 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:21.531308 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:21.531316 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:21.531332 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:21.531341 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:21.531350 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:21.531359 | orchestrator |
2026-03-30 00:32:21.531368 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-30 00:32:21.531378 | orchestrator | Monday 30 March 2026 00:32:03 +0000 (0:00:01.072) 0:06:50.396 **********
2026-03-30 00:32:21.531387 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:21.531395 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:21.531403 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:21.531412 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:32:21.531420 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:21.531429 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:21.531437 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:21.531446 | orchestrator |
2026-03-30 00:32:21.531455 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-30 00:32:21.531463 | orchestrator | Monday 30 March 2026 00:32:04 +0000 (0:00:00.442) 0:06:50.838 **********
2026-03-30 00:32:21.531473 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:32:21.531484 | orchestrator |
2026-03-30 00:32:21.531493 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-30 00:32:21.531502 | orchestrator | Monday 30 March 2026 00:32:04 +0000 (0:00:00.755) 0:06:51.593 **********
2026-03-30 00:32:21.531510 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:21.531519 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:21.531528 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:21.531537 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:21.531545 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:21.531554 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:21.531563 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:21.531571 | orchestrator |
2026-03-30 00:32:21.531580 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-30 00:32:21.531589 | orchestrator | Monday 30 March 2026 00:32:05 +0000 (0:00:00.908) 0:06:52.502 **********
2026-03-30 00:32:21.531598 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-30 00:32:21.531621 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-30 00:32:21.531630 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-30 00:32:21.531638 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-30 00:32:21.531646 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-30 00:32:21.531654 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-30 00:32:21.531661 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-30 00:32:21.531669 | orchestrator | ok: [testbed-manager] =>
(item=docker_images)
2026-03-30 00:32:21.531677 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-30 00:32:21.531685 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-30 00:32:21.531693 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-30 00:32:21.531700 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-30 00:32:21.531708 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-30 00:32:21.531767 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-30 00:32:21.531782 | orchestrator |
2026-03-30 00:32:21.531790 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-30 00:32:21.531798 | orchestrator | Monday 30 March 2026 00:32:08 +0000 (0:00:02.443) 0:06:54.945 **********
2026-03-30 00:32:21.531806 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:21.531813 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:21.531821 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:21.531836 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:32:21.531844 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:21.531852 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:21.531860 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:21.531867 | orchestrator |
2026-03-30 00:32:21.531875 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-30 00:32:21.531883 | orchestrator | Monday 30 March 2026 00:32:08 +0000 (0:00:00.427) 0:06:55.373 **********
2026-03-30 00:32:21.531893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:32:21.531902 | orchestrator |
2026-03-30 00:32:21.531911 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-30 00:32:21.531918 | orchestrator | Monday 30 March 2026 00:32:09 +0000 (0:00:00.865) 0:06:56.239 **********
2026-03-30 00:32:21.531926 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:21.531934 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:21.531942 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:21.531950 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:21.531957 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:21.531965 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:21.531973 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:21.531980 | orchestrator |
2026-03-30 00:32:21.531993 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-30 00:32:21.532001 | orchestrator | Monday 30 March 2026 00:32:10 +0000 (0:00:00.842) 0:06:57.082 **********
2026-03-30 00:32:21.532009 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:21.532017 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:21.532025 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:21.532032 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:21.532040 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:21.532047 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:21.532055 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:21.532063 | orchestrator |
2026-03-30 00:32:21.532071 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-30 00:32:21.532079 | orchestrator | Monday 30 March 2026 00:32:11 +0000 (0:00:00.797) 0:06:57.880 **********
2026-03-30 00:32:21.532087 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:21.532094 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:21.532102 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:21.532110 | orchestrator | skipping:
[testbed-node-2]
2026-03-30 00:32:21.532118 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:21.532126 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:21.532133 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:21.532141 | orchestrator |
2026-03-30 00:32:21.532149 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-30 00:32:21.532157 | orchestrator | Monday 30 March 2026 00:32:11 +0000 (0:00:00.492) 0:06:58.372 **********
2026-03-30 00:32:21.532165 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:21.532173 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:21.532180 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:21.532188 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:21.532196 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:21.532204 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:21.532211 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:21.532219 | orchestrator |
2026-03-30 00:32:21.532227 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-30 00:32:21.532235 | orchestrator | Monday 30 March 2026 00:32:13 +0000 (0:00:01.573) 0:06:59.945 **********
2026-03-30 00:32:21.532243 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:21.532250 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:21.532258 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:21.532266 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:32:21.532274 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:21.532288 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:21.532296 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:21.532304 | orchestrator |
2026-03-30 00:32:21.532312 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-30 00:32:21.532319 | orchestrator | Monday 30 March 2026 00:32:13
+0000 (0:00:00.619) 0:07:00.565 **********
2026-03-30 00:32:21.532327 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:21.532335 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:21.532343 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:21.532350 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:21.532358 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:21.532366 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:21.532380 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:53.587578 | orchestrator |
2026-03-30 00:32:53.587742 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-30 00:32:53.587764 | orchestrator | Monday 30 March 2026 00:32:21 +0000 (0:00:07.630) 0:07:08.196 **********
2026-03-30 00:32:53.587777 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.587789 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:53.587800 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:53.587811 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:53.587822 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:53.587833 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:53.587844 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:53.587855 | orchestrator |
2026-03-30 00:32:53.587879 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-30 00:32:53.587890 | orchestrator | Monday 30 March 2026 00:32:22 +0000 (0:00:01.323) 0:07:09.519 **********
2026-03-30 00:32:53.587912 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.587923 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:53.587934 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:53.587944 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:53.587955 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:53.587966 | orchestrator | changed:
[testbed-node-4]
2026-03-30 00:32:53.587977 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:53.587988 | orchestrator |
2026-03-30 00:32:53.587999 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-30 00:32:53.588010 | orchestrator | Monday 30 March 2026 00:32:24 +0000 (0:00:01.706) 0:07:11.225 **********
2026-03-30 00:32:53.588020 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.588031 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:32:53.588042 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:32:53.588052 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:32:53.588063 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:32:53.588074 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:32:53.588084 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:32:53.588095 | orchestrator |
2026-03-30 00:32:53.588106 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-30 00:32:53.588120 | orchestrator | Monday 30 March 2026 00:32:26 +0000 (0:00:01.747) 0:07:12.972 **********
2026-03-30 00:32:53.588132 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.588144 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:53.588157 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:53.588170 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:53.588182 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:53.588195 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:53.588207 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:53.588219 | orchestrator |
2026-03-30 00:32:53.588232 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-30 00:32:53.588244 | orchestrator | Monday 30 March 2026 00:32:27 +0000 (0:00:00.870) 0:07:13.843 **********
2026-03-30 00:32:53.588257 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:53.588269 |
orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:53.588281 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:53.588320 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:32:53.588333 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:53.588346 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:53.588359 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:53.588372 | orchestrator |
2026-03-30 00:32:53.588385 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-30 00:32:53.588397 | orchestrator | Monday 30 March 2026 00:32:27 +0000 (0:00:00.745) 0:07:14.589 **********
2026-03-30 00:32:53.588410 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:53.588422 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:53.588434 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:53.588447 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:32:53.588460 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:53.588471 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:53.588482 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:53.588493 | orchestrator |
2026-03-30 00:32:53.588503 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-30 00:32:53.588514 | orchestrator | Monday 30 March 2026 00:32:28 +0000 (0:00:00.649) 0:07:15.238 **********
2026-03-30 00:32:53.588525 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.588536 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:53.588546 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:53.588557 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:53.588568 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:53.588578 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:53.588589 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:53.588599 | orchestrator |
2026-03-30 00:32:53.588610 |
orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-30 00:32:53.588621 | orchestrator | Monday 30 March 2026 00:32:29 +0000 (0:00:00.485) 0:07:15.724 **********
2026-03-30 00:32:53.588631 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.588642 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:53.588653 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:53.588663 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:53.588753 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:53.588764 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:53.588774 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:53.588785 | orchestrator |
2026-03-30 00:32:53.588796 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-30 00:32:53.588807 | orchestrator | Monday 30 March 2026 00:32:29 +0000 (0:00:00.501) 0:07:16.225 **********
2026-03-30 00:32:53.588818 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.588828 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:53.588839 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:53.588849 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:53.588860 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:53.588870 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:53.588881 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:53.588891 | orchestrator |
2026-03-30 00:32:53.588902 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-30 00:32:53.588913 | orchestrator | Monday 30 March 2026 00:32:30 +0000 (0:00:00.491) 0:07:16.717 **********
2026-03-30 00:32:53.588924 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.588935 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:53.588945 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:53.588956 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:32:53.588966 |
orchestrator | ok: [testbed-node-5]
2026-03-30 00:32:53.588977 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:32:53.589006 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:32:53.589017 | orchestrator |
2026-03-30 00:32:53.589046 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-30 00:32:53.589058 | orchestrator | Monday 30 March 2026 00:32:35 +0000 (0:00:05.003) 0:07:21.720 **********
2026-03-30 00:32:53.589069 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:32:53.589080 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:32:53.589104 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:32:53.589113 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:32:53.589123 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:32:53.589132 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:32:53.589142 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:32:53.589151 | orchestrator |
2026-03-30 00:32:53.589161 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-30 00:32:53.589170 | orchestrator | Monday 30 March 2026 00:32:35 +0000 (0:00:00.681) 0:07:22.401 **********
2026-03-30 00:32:53.589181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:32:53.589194 | orchestrator |
2026-03-30 00:32:53.589203 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-30 00:32:53.589213 | orchestrator | Monday 30 March 2026 00:32:36 +0000 (0:00:00.783) 0:07:23.185 **********
2026-03-30 00:32:53.589222 | orchestrator | ok: [testbed-manager]
2026-03-30 00:32:53.589232 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:32:53.589241 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:32:53.589251 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:32:53.589260 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:32:53.589270 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:32:53.589279 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:32:53.589289 | orchestrator | 2026-03-30 00:32:53.589298 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-30 00:32:53.589308 | orchestrator | Monday 30 March 2026 00:32:38 +0000 (0:00:02.282) 0:07:25.467 ********** 2026-03-30 00:32:53.589317 | orchestrator | ok: [testbed-manager] 2026-03-30 00:32:53.589327 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:32:53.589336 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:32:53.589345 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:32:53.589354 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:32:53.589364 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:32:53.589373 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:32:53.589383 | orchestrator | 2026-03-30 00:32:53.589392 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-30 00:32:53.589402 | orchestrator | Monday 30 March 2026 00:32:40 +0000 (0:00:01.333) 0:07:26.801 ********** 2026-03-30 00:32:53.589412 | orchestrator | ok: [testbed-manager] 2026-03-30 00:32:53.589421 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:32:53.589430 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:32:53.589440 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:32:53.589449 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:32:53.589458 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:32:53.589468 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:32:53.589478 | orchestrator | 2026-03-30 00:32:53.589487 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-30 00:32:53.589502 | orchestrator | Monday 30 March 2026 00:32:41 +0000 
(0:00:00.923) 0:07:27.725 ********** 2026-03-30 00:32:53.589512 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589524 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589534 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589544 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589554 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589563 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589580 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-30 00:32:53.589590 | orchestrator | 2026-03-30 00:32:53.589599 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-30 00:32:53.589609 | orchestrator | Monday 30 March 2026 00:32:42 +0000 (0:00:01.716) 0:07:29.442 ********** 2026-03-30 00:32:53.589619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:32:53.589629 | orchestrator | 2026-03-30 00:32:53.589639 | orchestrator | TASK 
[osism.services.lldpd : Install lldpd package] **************************** 2026-03-30 00:32:53.589648 | orchestrator | Monday 30 March 2026 00:32:43 +0000 (0:00:00.904) 0:07:30.347 ********** 2026-03-30 00:32:53.589658 | orchestrator | changed: [testbed-manager] 2026-03-30 00:32:53.589689 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:32:53.589699 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:32:53.589709 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:32:53.589718 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:32:53.589728 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:32:53.589738 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:32:53.589747 | orchestrator | 2026-03-30 00:32:53.589763 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-30 00:33:24.174989 | orchestrator | Monday 30 March 2026 00:32:53 +0000 (0:00:09.833) 0:07:40.181 ********** 2026-03-30 00:33:24.175130 | orchestrator | ok: [testbed-manager] 2026-03-30 00:33:24.175160 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:33:24.175181 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:33:24.175202 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:33:24.175223 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:33:24.175244 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:33:24.175264 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:33:24.175286 | orchestrator | 2026-03-30 00:33:24.175306 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-30 00:33:24.175324 | orchestrator | Monday 30 March 2026 00:32:55 +0000 (0:00:01.692) 0:07:41.873 ********** 2026-03-30 00:33:24.175345 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:33:24.175365 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:33:24.175386 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:33:24.175407 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:33:24.175427 
| orchestrator | ok: [testbed-node-4] 2026-03-30 00:33:24.175448 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:33:24.175469 | orchestrator | 2026-03-30 00:33:24.175491 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-30 00:33:24.175515 | orchestrator | Monday 30 March 2026 00:32:57 +0000 (0:00:01.748) 0:07:43.621 ********** 2026-03-30 00:33:24.175537 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.175562 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.175579 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.175597 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.175710 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.175730 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.175749 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.175767 | orchestrator | 2026-03-30 00:33:24.175787 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-30 00:33:24.175804 | orchestrator | 2026-03-30 00:33:24.175823 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-30 00:33:24.175841 | orchestrator | Monday 30 March 2026 00:32:58 +0000 (0:00:01.281) 0:07:44.903 ********** 2026-03-30 00:33:24.175860 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:33:24.175878 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:33:24.175933 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:33:24.175953 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:33:24.175973 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:33:24.175990 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:33:24.176007 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:33:24.176024 | orchestrator | 2026-03-30 00:33:24.176042 | orchestrator | PLAY [Apply bootstrap roles part 3] 
******************************************** 2026-03-30 00:33:24.176061 | orchestrator | 2026-03-30 00:33:24.176078 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-30 00:33:24.176096 | orchestrator | Monday 30 March 2026 00:32:58 +0000 (0:00:00.490) 0:07:45.393 ********** 2026-03-30 00:33:24.176116 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.176133 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.176151 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.176168 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.176187 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.176227 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.176247 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.176264 | orchestrator | 2026-03-30 00:33:24.176283 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-30 00:33:24.176301 | orchestrator | Monday 30 March 2026 00:33:00 +0000 (0:00:01.324) 0:07:46.718 ********** 2026-03-30 00:33:24.176318 | orchestrator | ok: [testbed-manager] 2026-03-30 00:33:24.176337 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:33:24.176355 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:33:24.176372 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:33:24.176391 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:33:24.176409 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:33:24.176426 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:33:24.176444 | orchestrator | 2026-03-30 00:33:24.176461 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-30 00:33:24.176479 | orchestrator | Monday 30 March 2026 00:33:01 +0000 (0:00:01.687) 0:07:48.405 ********** 2026-03-30 00:33:24.176498 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:33:24.176515 | orchestrator | skipping: [testbed-node-0] 
2026-03-30 00:33:24.176533 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:33:24.176551 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:33:24.176570 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:33:24.176589 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:33:24.176632 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:33:24.176654 | orchestrator | 2026-03-30 00:33:24.176674 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-30 00:33:24.176692 | orchestrator | Monday 30 March 2026 00:33:02 +0000 (0:00:00.469) 0:07:48.874 ********** 2026-03-30 00:33:24.176711 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:33:24.176732 | orchestrator | 2026-03-30 00:33:24.176751 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-30 00:33:24.176770 | orchestrator | Monday 30 March 2026 00:33:03 +0000 (0:00:00.762) 0:07:49.637 ********** 2026-03-30 00:33:24.176792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:33:24.176814 | orchestrator | 2026-03-30 00:33:24.176833 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-30 00:33:24.176850 | orchestrator | Monday 30 March 2026 00:33:03 +0000 (0:00:00.889) 0:07:50.526 ********** 2026-03-30 00:33:24.176868 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.176887 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.176906 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.176925 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.176964 | 
orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.176982 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.177001 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.177021 | orchestrator | 2026-03-30 00:33:24.177070 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-30 00:33:24.177090 | orchestrator | Monday 30 March 2026 00:33:12 +0000 (0:00:09.037) 0:07:59.564 ********** 2026-03-30 00:33:24.177107 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.177125 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.177142 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.177160 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.177178 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.177195 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.177213 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.177230 | orchestrator | 2026-03-30 00:33:24.177247 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-30 00:33:24.177265 | orchestrator | Monday 30 March 2026 00:33:13 +0000 (0:00:00.877) 0:08:00.442 ********** 2026-03-30 00:33:24.177281 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.177298 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.177316 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.177334 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.177351 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.177368 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.177385 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.177402 | orchestrator | 2026-03-30 00:33:24.177419 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-30 00:33:24.177436 | orchestrator | Monday 30 March 2026 00:33:15 +0000 (0:00:01.318) 
0:08:01.760 ********** 2026-03-30 00:33:24.177453 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.177471 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.177488 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.177506 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.177523 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.177541 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.177557 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.177576 | orchestrator | 2026-03-30 00:33:24.177593 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-30 00:33:24.177638 | orchestrator | Monday 30 March 2026 00:33:17 +0000 (0:00:01.904) 0:08:03.664 ********** 2026-03-30 00:33:24.177657 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.177675 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.177693 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.177710 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.177727 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.177745 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.177764 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.177781 | orchestrator | 2026-03-30 00:33:24.177798 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-30 00:33:24.177816 | orchestrator | Monday 30 March 2026 00:33:18 +0000 (0:00:01.333) 0:08:04.997 ********** 2026-03-30 00:33:24.177833 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.177852 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.177870 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:24.177887 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.177906 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.177937 | orchestrator | changed: [testbed-node-4] 
2026-03-30 00:33:24.177957 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.177975 | orchestrator | 2026-03-30 00:33:24.177992 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-30 00:33:24.178012 | orchestrator | 2026-03-30 00:33:24.178114 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-30 00:33:24.178126 | orchestrator | Monday 30 March 2026 00:33:19 +0000 (0:00:01.113) 0:08:06.111 ********** 2026-03-30 00:33:24.178152 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:33:24.178163 | orchestrator | 2026-03-30 00:33:24.178174 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-30 00:33:24.178185 | orchestrator | Monday 30 March 2026 00:33:20 +0000 (0:00:00.890) 0:08:07.001 ********** 2026-03-30 00:33:24.178195 | orchestrator | ok: [testbed-manager] 2026-03-30 00:33:24.178206 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:33:24.178217 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:33:24.178227 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:33:24.178238 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:33:24.178249 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:33:24.178259 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:33:24.178270 | orchestrator | 2026-03-30 00:33:24.178280 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-30 00:33:24.178291 | orchestrator | Monday 30 March 2026 00:33:21 +0000 (0:00:00.839) 0:08:07.841 ********** 2026-03-30 00:33:24.178301 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:24.178312 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:24.178323 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:24.178334 | orchestrator | 
changed: [testbed-node-2] 2026-03-30 00:33:24.178344 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:24.178355 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:24.178365 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:24.178376 | orchestrator | 2026-03-30 00:33:24.178387 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-30 00:33:24.178398 | orchestrator | Monday 30 March 2026 00:33:22 +0000 (0:00:01.257) 0:08:09.099 ********** 2026-03-30 00:33:24.178408 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:33:24.178419 | orchestrator | 2026-03-30 00:33:24.178430 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-30 00:33:24.178441 | orchestrator | Monday 30 March 2026 00:33:23 +0000 (0:00:00.815) 0:08:09.914 ********** 2026-03-30 00:33:24.178452 | orchestrator | ok: [testbed-manager] 2026-03-30 00:33:24.178462 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:33:24.178473 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:33:24.178484 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:33:24.178494 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:33:24.178505 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:33:24.178515 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:33:24.178526 | orchestrator | 2026-03-30 00:33:24.178551 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-30 00:33:25.684708 | orchestrator | Monday 30 March 2026 00:33:24 +0000 (0:00:00.852) 0:08:10.767 ********** 2026-03-30 00:33:25.684800 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:25.684816 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:25.684827 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:25.684838 | orchestrator | 
changed: [testbed-node-2] 2026-03-30 00:33:25.684847 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:25.684856 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:25.684866 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:25.684875 | orchestrator | 2026-03-30 00:33:25.684886 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:33:25.684896 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-30 00:33:25.684907 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-30 00:33:25.684918 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-30 00:33:25.684957 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-30 00:33:25.684968 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-30 00:33:25.684978 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-30 00:33:25.684988 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-30 00:33:25.684998 | orchestrator | 2026-03-30 00:33:25.685008 | orchestrator | 2026-03-30 00:33:25.685018 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:33:25.685029 | orchestrator | Monday 30 March 2026 00:33:25 +0000 (0:00:01.222) 0:08:11.990 ********** 2026-03-30 00:33:25.685040 | orchestrator | =============================================================================== 2026-03-30 00:33:25.685050 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.64s 2026-03-30 00:33:25.685060 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 37.70s 2026-03-30 00:33:25.685070 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.77s 2026-03-30 00:33:25.685098 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.69s 2026-03-30 00:33:25.685109 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.40s 2026-03-30 00:33:25.685121 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.38s 2026-03-30 00:33:25.685130 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.06s 2026-03-30 00:33:25.685140 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.68s 2026-03-30 00:33:25.685150 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.83s 2026-03-30 00:33:25.685159 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.43s 2026-03-30 00:33:25.685168 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.29s 2026-03-30 00:33:25.685178 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.04s 2026-03-30 00:33:25.685187 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.84s 2026-03-30 00:33:25.685198 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.54s 2026-03-30 00:33:25.685209 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.38s 2026-03-30 00:33:25.685219 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.63s 2026-03-30 00:33:25.685229 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.87s 2026-03-30 00:33:25.685237 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 6.43s 2026-03-30 00:33:25.685243 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.76s 2026-03-30 00:33:25.685248 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.00s 2026-03-30 00:33:25.864400 | orchestrator | + osism apply fail2ban 2026-03-30 00:33:37.505670 | orchestrator | 2026-03-30 00:33:37 | INFO  | Prepare task for execution of fail2ban. 2026-03-30 00:33:37.621785 | orchestrator | 2026-03-30 00:33:37 | INFO  | Task 6ce0966e-c5b5-4cab-a4ce-bae1cda9c9d3 (fail2ban) was prepared for execution. 2026-03-30 00:33:37.621890 | orchestrator | 2026-03-30 00:33:37 | INFO  | It takes a moment until task 6ce0966e-c5b5-4cab-a4ce-bae1cda9c9d3 (fail2ban) has been started and output is visible here. 2026-03-30 00:33:59.212598 | orchestrator | 2026-03-30 00:33:59.212697 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-30 00:33:59.212731 | orchestrator | 2026-03-30 00:33:59.212740 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-30 00:33:59.212747 | orchestrator | Monday 30 March 2026 00:33:41 +0000 (0:00:00.346) 0:00:00.346 ********** 2026-03-30 00:33:59.212756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:33:59.212765 | orchestrator | 2026-03-30 00:33:59.212772 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-30 00:33:59.212779 | orchestrator | Monday 30 March 2026 00:33:42 +0000 (0:00:01.178) 0:00:01.524 ********** 2026-03-30 00:33:59.212786 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:59.212794 | orchestrator 
| changed: [testbed-node-0] 2026-03-30 00:33:59.212800 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:59.212807 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:59.212813 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:59.212820 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:59.212826 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:59.212833 | orchestrator | 2026-03-30 00:33:59.212839 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-30 00:33:59.212846 | orchestrator | Monday 30 March 2026 00:33:54 +0000 (0:00:11.850) 0:00:13.375 ********** 2026-03-30 00:33:59.212852 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:59.212859 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:59.212865 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:59.212872 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:59.212878 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:59.212885 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:59.212891 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:59.212898 | orchestrator | 2026-03-30 00:33:59.212904 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-30 00:33:59.212911 | orchestrator | Monday 30 March 2026 00:33:55 +0000 (0:00:01.583) 0:00:14.959 ********** 2026-03-30 00:33:59.212917 | orchestrator | ok: [testbed-manager] 2026-03-30 00:33:59.212925 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:33:59.212932 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:33:59.212938 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:33:59.212945 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:33:59.212951 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:33:59.212958 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:33:59.212964 | orchestrator | 2026-03-30 00:33:59.212971 | orchestrator | TASK 
[osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-30 00:33:59.212977 | orchestrator | Monday 30 March 2026 00:33:57 +0000 (0:00:01.323) 0:00:16.282 ********** 2026-03-30 00:33:59.212984 | orchestrator | changed: [testbed-manager] 2026-03-30 00:33:59.212991 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:33:59.212997 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:33:59.213004 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:33:59.213010 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:33:59.213017 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:33:59.213023 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:33:59.213030 | orchestrator | 2026-03-30 00:33:59.213036 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:33:59.213055 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213063 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213070 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213076 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213089 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213095 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213102 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:33:59.213109 | orchestrator | 2026-03-30 00:33:59.213116 | orchestrator | 2026-03-30 00:33:59.213123 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 
00:33:59.213131 | orchestrator | Monday 30 March 2026 00:33:58 +0000 (0:00:01.603) 0:00:17.886 ********** 2026-03-30 00:33:59.213139 | orchestrator | =============================================================================== 2026-03-30 00:33:59.213146 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.85s 2026-03-30 00:33:59.213154 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.60s 2026-03-30 00:33:59.213161 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.58s 2026-03-30 00:33:59.213169 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.32s 2026-03-30 00:33:59.213176 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.18s 2026-03-30 00:33:59.378169 | orchestrator | + osism apply network 2026-03-30 00:34:10.630122 | orchestrator | 2026-03-30 00:34:10 | INFO  | Prepare task for execution of network. 2026-03-30 00:34:10.715339 | orchestrator | 2026-03-30 00:34:10 | INFO  | Task 7b4d1704-24fe-4a39-8165-028fe3067ac0 (network) was prepared for execution. 2026-03-30 00:34:10.715426 | orchestrator | 2026-03-30 00:34:10 | INFO  | It takes a moment until task 7b4d1704-24fe-4a39-8165-028fe3067ac0 (network) has been started and output is visible here. 
2026-03-30 00:34:39.550845 | orchestrator |
2026-03-30 00:34:39.551031 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-30 00:34:39.551060 | orchestrator |
2026-03-30 00:34:39.551079 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-30 00:34:39.551098 | orchestrator | Monday 30 March 2026 00:34:14 +0000 (0:00:00.330) 0:00:00.330 **********
2026-03-30 00:34:39.551117 | orchestrator | ok: [testbed-manager]
2026-03-30 00:34:39.551138 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:34:39.551157 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:34:39.551175 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:34:39.551194 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:34:39.551212 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:34:39.551231 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:34:39.551249 | orchestrator |
2026-03-30 00:34:39.551267 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-30 00:34:39.551279 | orchestrator | Monday 30 March 2026 00:34:14 +0000 (0:00:00.606) 0:00:00.937 **********
2026-03-30 00:34:39.551292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:34:39.551306 | orchestrator |
2026-03-30 00:34:39.551317 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-30 00:34:39.551328 | orchestrator | Monday 30 March 2026 00:34:15 +0000 (0:00:01.126) 0:00:02.063 **********
2026-03-30 00:34:39.551339 | orchestrator | ok: [testbed-manager]
2026-03-30 00:34:39.551352 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:34:39.551365 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:34:39.551377 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:34:39.551389 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:34:39.551401 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:34:39.551442 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:34:39.551553 | orchestrator |
2026-03-30 00:34:39.551574 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-30 00:34:39.551593 | orchestrator | Monday 30 March 2026 00:34:18 +0000 (0:00:02.765) 0:00:04.829 **********
2026-03-30 00:34:39.551611 | orchestrator | ok: [testbed-manager]
2026-03-30 00:34:39.551629 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:34:39.551647 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:34:39.551665 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:34:39.551683 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:34:39.551702 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:34:39.551721 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:34:39.551740 | orchestrator |
2026-03-30 00:34:39.551758 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-30 00:34:39.551774 | orchestrator | Monday 30 March 2026 00:34:20 +0000 (0:00:01.524) 0:00:06.354 **********
2026-03-30 00:34:39.551785 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-30 00:34:39.551801 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-30 00:34:39.551819 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-30 00:34:39.551838 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-30 00:34:39.551855 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-30 00:34:39.551872 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-30 00:34:39.551890 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-30 00:34:39.551909 | orchestrator |
2026-03-30 00:34:39.551927 | orchestrator | TASK [osism.commons.network : Write
network_netplan_config_template to temporary file] *** 2026-03-30 00:34:39.551947 | orchestrator | Monday 30 March 2026 00:34:21 +0000 (0:00:01.045) 0:00:07.399 ********** 2026-03-30 00:34:39.551966 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:34:39.551986 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:34:39.552004 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:34:39.552020 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:34:39.552031 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:34:39.552042 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:34:39.552052 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:34:39.552082 | orchestrator | 2026-03-30 00:34:39.552105 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-03-30 00:34:39.552117 | orchestrator | Monday 30 March 2026 00:34:21 +0000 (0:00:00.566) 0:00:07.966 ********** 2026-03-30 00:34:39.552128 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:34:39.552143 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:34:39.552161 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:34:39.552180 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:34:39.552196 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:34:39.552214 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:34:39.552232 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:34:39.552252 | orchestrator | 2026-03-30 00:34:39.552296 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-03-30 00:34:39.552317 | orchestrator | Monday 30 March 2026 00:34:22 +0000 (0:00:00.668) 0:00:08.635 ********** 2026-03-30 00:34:39.552337 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:34:39.552354 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:34:39.552373 | orchestrator | skipping: [testbed-node-1] 
2026-03-30 00:34:39.552393 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:34:39.552411 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:34:39.552430 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:34:39.552442 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:34:39.552479 | orchestrator | 2026-03-30 00:34:39.552492 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-30 00:34:39.552503 | orchestrator | Monday 30 March 2026 00:34:23 +0000 (0:00:00.729) 0:00:09.365 ********** 2026-03-30 00:34:39.552514 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 00:34:39.552538 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-30 00:34:39.552550 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-30 00:34:39.552560 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 00:34:39.552571 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-30 00:34:39.552581 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 00:34:39.552592 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-30 00:34:39.552608 | orchestrator | 2026-03-30 00:34:39.552657 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-30 00:34:39.552676 | orchestrator | Monday 30 March 2026 00:34:26 +0000 (0:00:03.244) 0:00:12.610 ********** 2026-03-30 00:34:39.552693 | orchestrator | changed: [testbed-manager] 2026-03-30 00:34:39.552712 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:34:39.552732 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:34:39.552751 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:34:39.552769 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:34:39.552788 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:34:39.552807 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:34:39.552825 | orchestrator | 2026-03-30 00:34:39.552844 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-03-30 00:34:39.552862 | orchestrator | Monday 30 March 2026 00:34:27 +0000 (0:00:01.619) 0:00:14.230 ********** 2026-03-30 00:34:39.552879 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 00:34:39.552897 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-30 00:34:39.552916 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 00:34:39.552934 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 00:34:39.552953 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-30 00:34:39.552972 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-30 00:34:39.552990 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-30 00:34:39.553009 | orchestrator | 2026-03-30 00:34:39.553027 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-30 00:34:39.553045 | orchestrator | Monday 30 March 2026 00:34:29 +0000 (0:00:01.814) 0:00:16.044 ********** 2026-03-30 00:34:39.553065 | orchestrator | ok: [testbed-manager] 2026-03-30 00:34:39.553084 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:34:39.553102 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:34:39.553121 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:34:39.553139 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:34:39.553158 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:34:39.553175 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:34:39.553194 | orchestrator | 2026-03-30 00:34:39.553213 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-30 00:34:39.553232 | orchestrator | Monday 30 March 2026 00:34:30 +0000 (0:00:01.125) 0:00:17.170 ********** 2026-03-30 00:34:39.553250 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:34:39.553268 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:34:39.553287 | orchestrator | skipping: [testbed-node-1] 2026-03-30 
00:34:39.553304 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:34:39.553322 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:34:39.553341 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:34:39.553359 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:34:39.553377 | orchestrator | 2026-03-30 00:34:39.553396 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-30 00:34:39.553414 | orchestrator | Monday 30 March 2026 00:34:31 +0000 (0:00:00.632) 0:00:17.802 ********** 2026-03-30 00:34:39.553433 | orchestrator | ok: [testbed-manager] 2026-03-30 00:34:39.553490 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:34:39.553512 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:34:39.553532 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:34:39.553550 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:34:39.553568 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:34:39.553598 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:34:39.553617 | orchestrator | 2026-03-30 00:34:39.553636 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-30 00:34:39.553668 | orchestrator | Monday 30 March 2026 00:34:33 +0000 (0:00:02.468) 0:00:20.270 ********** 2026-03-30 00:34:39.553686 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:34:39.553705 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:34:39.553723 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:34:39.553742 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:34:39.553761 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:34:39.553779 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:34:39.553797 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-03-30 00:34:39.553817 | orchestrator | 2026-03-30 00:34:39.553835 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-30 00:34:39.553855 | orchestrator | Monday 30 March 2026 00:34:34 +0000 (0:00:00.884) 0:00:21.155 ********** 2026-03-30 00:34:39.553873 | orchestrator | ok: [testbed-manager] 2026-03-30 00:34:39.553891 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:34:39.553911 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:34:39.553929 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:34:39.553948 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:34:39.553967 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:34:39.553985 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:34:39.554003 | orchestrator | 2026-03-30 00:34:39.554111 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-30 00:34:39.554135 | orchestrator | Monday 30 March 2026 00:34:36 +0000 (0:00:01.740) 0:00:22.895 ********** 2026-03-30 00:34:39.554155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:34:39.554178 | orchestrator | 2026-03-30 00:34:39.554198 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-30 00:34:39.554216 | orchestrator | Monday 30 March 2026 00:34:37 +0000 (0:00:01.223) 0:00:24.119 ********** 2026-03-30 00:34:39.554234 | orchestrator | ok: [testbed-manager] 2026-03-30 00:34:39.554253 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:34:39.554273 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:34:39.554290 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:34:39.554306 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:34:39.554317 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:34:39.554328 | orchestrator | ok: [testbed-node-5] 2026-03-30 
00:34:39.554338 | orchestrator | 2026-03-30 00:34:39.554349 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-30 00:34:39.554360 | orchestrator | Monday 30 March 2026 00:34:39 +0000 (0:00:01.174) 0:00:25.293 ********** 2026-03-30 00:34:39.554371 | orchestrator | ok: [testbed-manager] 2026-03-30 00:34:39.554382 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:34:39.554392 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:34:39.554403 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:34:39.554413 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:34:39.554437 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:34:56.101252 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:34:56.101334 | orchestrator | 2026-03-30 00:34:56.101342 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-30 00:34:56.101350 | orchestrator | Monday 30 March 2026 00:34:39 +0000 (0:00:00.647) 0:00:25.941 ********** 2026-03-30 00:34:56.101356 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101362 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101385 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101391 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101396 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101468 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101475 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101481 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101486 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101491 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101496 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101501 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-30 00:34:56.101506 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101511 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-30 00:34:56.101516 | orchestrator | 2026-03-30 00:34:56.101521 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-30 00:34:56.101526 | orchestrator | Monday 30 March 2026 00:34:40 +0000 (0:00:01.288) 0:00:27.229 ********** 2026-03-30 00:34:56.101532 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:34:56.101537 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:34:56.101542 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:34:56.101547 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:34:56.101552 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:34:56.101557 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:34:56.101562 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:34:56.101567 | orchestrator | 2026-03-30 00:34:56.101572 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-30 00:34:56.101577 | orchestrator | Monday 30 March 2026 00:34:41 +0000 (0:00:00.628) 0:00:27.858 ********** 2026-03-30 00:34:56.101594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-2, testbed-node-5, testbed-node-3, testbed-node-4 2026-03-30 00:34:56.101602 | orchestrator | 2026-03-30 
00:34:56.101607 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-30 00:34:56.101612 | orchestrator | Monday 30 March 2026 00:34:45 +0000 (0:00:04.350) 0:00:32.208 ********** 2026-03-30 00:34:56.101618 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-30 00:34:56.101625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101647 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': 
['192.168.128.5/20']}}) 2026-03-30 00:34:56.101668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-30 00:34:56.101689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-30 00:34:56.101694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-30 00:34:56.101700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-30 00:34:56.101705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-30 00:34:56.101710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-30 00:34:56.101715 | orchestrator | 2026-03-30 00:34:56.101723 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-30 00:34:56.101729 | orchestrator | Monday 30 March 2026 00:34:51 +0000 (0:00:05.756) 0:00:37.965 ********** 2026-03-30 00:34:56.101734 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-30 00:34:56.101739 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-30 00:34:56.101745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101755 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:34:56.101781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-30 00:35:07.362681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-30 00:35:07.362769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-30 00:35:07.362778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-30 00:35:07.362786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-30 00:35:07.362794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-30 00:35:07.362801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-30 00:35:07.362810 | orchestrator | 2026-03-30 00:35:07.362819 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-30 00:35:07.362829 | orchestrator | Monday 30 March 2026 00:34:57 +0000 (0:00:05.318) 0:00:43.283 ********** 2026-03-30 00:35:07.362847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:35:07.362853 | orchestrator | 2026-03-30 00:35:07.362857 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-30 00:35:07.362862 | orchestrator | Monday 30 March 2026 00:34:58 +0000 (0:00:01.337) 0:00:44.621 ********** 2026-03-30 00:35:07.362867 | orchestrator | ok: [testbed-manager] 2026-03-30 00:35:07.362872 | orchestrator | ok: [testbed-node-0] 2026-03-30 
00:35:07.362877 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:35:07.362882 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:35:07.362887 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:35:07.362891 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:35:07.362896 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:35:07.362900 | orchestrator | 2026-03-30 00:35:07.362919 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-30 00:35:07.362924 | orchestrator | Monday 30 March 2026 00:34:59 +0000 (0:00:00.941) 0:00:45.562 ********** 2026-03-30 00:35:07.362929 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-30 00:35:07.362934 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-30 00:35:07.362939 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-30 00:35:07.362943 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-30 00:35:07.362948 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:35:07.362954 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-30 00:35:07.362958 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-30 00:35:07.362963 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-30 00:35:07.362967 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-30 00:35:07.362972 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:35:07.362976 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-30 00:35:07.362981 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-30 00:35:07.362985 | orchestrator | 
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-30 00:35:07.362990 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-30 00:35:07.362994 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:35:07.362999 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-30 00:35:07.363003 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-30 00:35:07.363008 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-30 00:35:07.363023 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-30 00:35:07.363028 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:35:07.363032 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-30 00:35:07.363037 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-30 00:35:07.363041 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-30 00:35:07.363046 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-30 00:35:07.363050 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-30 00:35:07.363055 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-30 00:35:07.363059 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-30 00:35:07.363064 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-30 00:35:07.363068 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:35:07.363073 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:35:07.363077 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.network)
2026-03-30 00:35:07.363082 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-30 00:35:07.363086 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-30 00:35:07.363091 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-30 00:35:07.363095 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:35:07.363100 | orchestrator |
2026-03-30 00:35:07.363104 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-30 00:35:07.363113 | orchestrator | Monday 30 March 2026 00:35:00 +0000 (0:00:00.922) 0:00:46.485 **********
2026-03-30 00:35:07.363118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:35:07.363123 | orchestrator |
2026-03-30 00:35:07.363127 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-30 00:35:07.363132 | orchestrator | Monday 30 March 2026 00:35:01 +0000 (0:00:01.266) 0:00:47.752 **********
2026-03-30 00:35:07.363136 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:35:07.363144 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:35:07.363148 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:35:07.363153 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:35:07.363158 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:35:07.363162 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:35:07.363167 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:35:07.363171 | orchestrator |
2026-03-30 00:35:07.363176 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-30 00:35:07.363180 | orchestrator | Monday 30 March 2026 00:35:02 +0000 (0:00:00.605) 0:00:48.358 **********
2026-03-30 00:35:07.363185 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:35:07.363189 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:35:07.363194 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:35:07.363198 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:35:07.363203 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:35:07.363207 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:35:07.363212 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:35:07.363216 | orchestrator |
2026-03-30 00:35:07.363221 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-30 00:35:07.363225 | orchestrator | Monday 30 March 2026 00:35:02 +0000 (0:00:00.773) 0:00:49.131 **********
2026-03-30 00:35:07.363231 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:35:07.363236 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:35:07.363241 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:35:07.363246 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:35:07.363252 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:35:07.363257 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:35:07.363262 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:35:07.363268 | orchestrator |
2026-03-30 00:35:07.363273 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-30 00:35:07.363280 | orchestrator | Monday 30 March 2026 00:35:03 +0000 (0:00:00.624) 0:00:49.755 **********
2026-03-30 00:35:07.363288 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:35:07.363300 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:35:07.363310 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:07.363317 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:35:07.363323 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:35:07.363330 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:35:07.363337 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:35:07.363344 | orchestrator |
2026-03-30 00:35:07.363350 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-30 00:35:07.363357 | orchestrator | Monday 30 March 2026 00:35:05 +0000 (0:00:01.790) 0:00:51.546 **********
2026-03-30 00:35:07.363364 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:07.363372 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:35:07.363379 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:35:07.363387 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:35:07.363394 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:35:07.363418 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:35:07.363424 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:35:07.363428 | orchestrator |
2026-03-30 00:35:07.363434 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-30 00:35:07.363441 | orchestrator | Monday 30 March 2026 00:35:06 +0000 (0:00:01.138) 0:00:52.685 **********
2026-03-30 00:35:07.363455 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:07.363460 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:35:07.363464 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:35:07.363469 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:35:07.363473 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:35:07.363478 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:35:07.363487 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:35:10.119229 | orchestrator |
2026-03-30 00:35:10.119336 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-30 00:35:10.119354 | orchestrator | Monday 30 March 2026 00:35:08 +0000 (0:00:02.108) 0:00:54.793 **********
2026-03-30 00:35:10.119367 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:35:10.119380 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:35:10.119391 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:35:10.119486 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:35:10.119499 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:35:10.119510 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:35:10.119521 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:35:10.119532 | orchestrator |
2026-03-30 00:35:10.119543 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-30 00:35:10.119554 | orchestrator | Monday 30 March 2026 00:35:09 +0000 (0:00:00.795) 0:00:55.589 **********
2026-03-30 00:35:10.119565 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:35:10.119576 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:35:10.119587 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:35:10.119598 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:35:10.119609 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:35:10.119619 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:35:10.119630 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:35:10.119641 | orchestrator |
2026-03-30 00:35:10.119652 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:35:10.119664 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-30 00:35:10.119677 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-30 00:35:10.119688 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-30 00:35:10.119699 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-30 00:35:10.119710 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-30 00:35:10.119721 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-30 00:35:10.119733 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-30 00:35:10.119743 | orchestrator |
2026-03-30 00:35:10.119761 | orchestrator |
2026-03-30 00:35:10.119774 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:35:10.119788 | orchestrator | Monday 30 March 2026 00:35:09 +0000 (0:00:00.510) 0:00:56.099 **********
2026-03-30 00:35:10.119800 | orchestrator | ===============================================================================
2026-03-30 00:35:10.119813 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.76s
2026-03-30 00:35:10.119826 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.32s
2026-03-30 00:35:10.119838 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.35s
2026-03-30 00:35:10.119877 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.24s
2026-03-30 00:35:10.119888 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.77s
2026-03-30 00:35:10.119899 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.47s
2026-03-30 00:35:10.119910 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.11s
2026-03-30 00:35:10.119921 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s
2026-03-30 00:35:10.119932 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.79s
2026-03-30 00:35:10.119943 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.74s
2026-03-30 00:35:10.119954 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.62s
2026-03-30 00:35:10.119964 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.52s
2026-03-30 00:35:10.119975 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.34s
2026-03-30 00:35:10.119986 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s
2026-03-30 00:35:10.119996 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.27s
2026-03-30 00:35:10.120007 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s
2026-03-30 00:35:10.120018 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s
2026-03-30 00:35:10.120029 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.14s
2026-03-30 00:35:10.120040 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.13s
2026-03-30 00:35:10.120051 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s
2026-03-30 00:35:10.346312 | orchestrator | + osism apply wireguard
2026-03-30 00:35:21.728891 | orchestrator | 2026-03-30 00:35:21 | INFO  | Prepare task for execution of wireguard.
2026-03-30 00:35:21.801826 | orchestrator | 2026-03-30 00:35:21 | INFO  | Task 0a87db57-542b-46b8-87bc-53b5774bd263 (wireguard) was prepared for execution.
2026-03-30 00:35:21.801962 | orchestrator | 2026-03-30 00:35:21 | INFO  | It takes a moment until task 0a87db57-542b-46b8-87bc-53b5774bd263 (wireguard) has been started and output is visible here.
2026-03-30 00:35:39.793323 | orchestrator |
2026-03-30 00:35:39.793465 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-30 00:35:39.793480 | orchestrator |
2026-03-30 00:35:39.793490 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-30 00:35:39.793499 | orchestrator | Monday 30 March 2026 00:35:25 +0000 (0:00:00.292) 0:00:00.292 **********
2026-03-30 00:35:39.793510 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:39.793520 | orchestrator |
2026-03-30 00:35:39.793529 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-30 00:35:39.793538 | orchestrator | Monday 30 March 2026 00:35:26 +0000 (0:00:01.838) 0:00:02.130 **********
2026-03-30 00:35:39.793546 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793556 | orchestrator |
2026-03-30 00:35:39.793565 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-30 00:35:39.793574 | orchestrator | Monday 30 March 2026 00:35:33 +0000 (0:00:06.186) 0:00:08.316 **********
2026-03-30 00:35:39.793582 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793591 | orchestrator |
2026-03-30 00:35:39.793600 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-30 00:35:39.793626 | orchestrator | Monday 30 March 2026 00:35:33 +0000 (0:00:00.482) 0:00:08.799 **********
2026-03-30 00:35:39.793636 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793644 | orchestrator |
2026-03-30 00:35:39.793653 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-30 00:35:39.793662 | orchestrator | Monday 30 March 2026 00:35:33 +0000 (0:00:00.391) 0:00:09.191 **********
2026-03-30 00:35:39.793690 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:39.793699 | orchestrator |
2026-03-30 00:35:39.793708 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-30 00:35:39.793717 | orchestrator | Monday 30 March 2026 00:35:34 +0000 (0:00:00.499) 0:00:09.690 **********
2026-03-30 00:35:39.793726 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:39.793734 | orchestrator |
2026-03-30 00:35:39.793743 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-30 00:35:39.793751 | orchestrator | Monday 30 March 2026 00:35:34 +0000 (0:00:00.439) 0:00:10.129 **********
2026-03-30 00:35:39.793760 | orchestrator | ok: [testbed-manager]
2026-03-30 00:35:39.793769 | orchestrator |
2026-03-30 00:35:39.793777 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-30 00:35:39.793791 | orchestrator | Monday 30 March 2026 00:35:35 +0000 (0:00:00.370) 0:00:10.500 **********
2026-03-30 00:35:39.793800 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793808 | orchestrator |
2026-03-30 00:35:39.793817 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-30 00:35:39.793826 | orchestrator | Monday 30 March 2026 00:35:36 +0000 (0:00:01.014) 0:00:11.514 **********
2026-03-30 00:35:39.793834 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-30 00:35:39.793843 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793852 | orchestrator |
2026-03-30 00:35:39.793860 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-30 00:35:39.793869 | orchestrator | Monday 30 March 2026 00:35:37 +0000 (0:00:00.857) 0:00:12.372 **********
2026-03-30 00:35:39.793878 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793888 | orchestrator |
2026-03-30 00:35:39.793898 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-30 00:35:39.793908 | orchestrator | Monday 30 March 2026 00:35:38 +0000 (0:00:01.723) 0:00:14.096 **********
2026-03-30 00:35:39.793918 | orchestrator | changed: [testbed-manager]
2026-03-30 00:35:39.793928 | orchestrator |
2026-03-30 00:35:39.793938 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:35:39.793948 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:35:39.793959 | orchestrator |
2026-03-30 00:35:39.793969 | orchestrator |
2026-03-30 00:35:39.793979 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:35:39.793989 | orchestrator | Monday 30 March 2026 00:35:39 +0000 (0:00:00.784) 0:00:14.880 **********
2026-03-30 00:35:39.793999 | orchestrator | ===============================================================================
2026-03-30 00:35:39.794008 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.19s
2026-03-30 00:35:39.794057 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.84s
2026-03-30 00:35:39.794067 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.72s
2026-03-30 00:35:39.794078 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.01s
2026-03-30 00:35:39.794088 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.86s
2026-03-30 00:35:39.794100 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.78s
2026-03-30 00:35:39.794109 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s
2026-03-30 00:35:39.794119 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.48s
2026-03-30 00:35:39.794129 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-03-30 00:35:39.794140 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.39s
2026-03-30 00:35:39.794150 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s
2026-03-30 00:35:39.914823 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-30 00:35:39.950150 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-30 00:35:39.950270 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-30 00:35:40.030288 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 190 0 --:--:-- --:--:-- --:--:-- 192
2026-03-30 00:35:40.042187 | orchestrator | + osism apply --environment custom workarounds
2026-03-30 00:35:41.210874 | orchestrator | 2026-03-30 00:35:41 | INFO  | Trying to run play workarounds in environment custom
2026-03-30 00:35:51.247253 | orchestrator | 2026-03-30 00:35:51 | INFO  | Prepare task for execution of workarounds.
2026-03-30 00:35:51.325003 | orchestrator | 2026-03-30 00:35:51 | INFO  | Task 30a0dbef-5f2d-4eaa-a198-00bb119f1a6f (workarounds) was prepared for execution.
2026-03-30 00:35:51.325106 | orchestrator | 2026-03-30 00:35:51 | INFO  | It takes a moment until task 30a0dbef-5f2d-4eaa-a198-00bb119f1a6f (workarounds) has been started and output is visible here.
2026-03-30 00:36:16.418557 | orchestrator |
2026-03-30 00:36:16.418687 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:36:16.418711 | orchestrator |
2026-03-30 00:36:16.418727 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-30 00:36:16.418743 | orchestrator | Monday 30 March 2026 00:35:54 +0000 (0:00:00.179) 0:00:00.179 **********
2026-03-30 00:36:16.418758 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418772 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418786 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418799 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418813 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418828 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418841 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-30 00:36:16.418856 | orchestrator |
2026-03-30 00:36:16.418869 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-30 00:36:16.418883 | orchestrator |
2026-03-30 00:36:16.418896 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-30 00:36:16.418910 | orchestrator | Monday 30 March 2026 00:35:55 +0000 (0:00:00.755) 0:00:00.935 **********
2026-03-30 00:36:16.418924 | orchestrator | ok: [testbed-manager]
2026-03-30 00:36:16.418940 | orchestrator |
2026-03-30 00:36:16.418973 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-30 00:36:16.418987 | orchestrator |
2026-03-30 00:36:16.419000 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-30 00:36:16.419013 | orchestrator | Monday 30 March 2026 00:35:58 +0000 (0:00:02.673) 0:00:03.609 **********
2026-03-30 00:36:16.419027 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:36:16.419040 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:36:16.419053 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:36:16.419066 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:36:16.419080 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:36:16.419094 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:36:16.419107 | orchestrator |
2026-03-30 00:36:16.419121 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-30 00:36:16.419135 | orchestrator |
2026-03-30 00:36:16.419149 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-30 00:36:16.419163 | orchestrator | Monday 30 March 2026 00:36:00 +0000 (0:00:02.305) 0:00:05.914 **********
2026-03-30 00:36:16.419180 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-30 00:36:16.419195 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-30 00:36:16.419210 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-30 00:36:16.419250 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-30 00:36:16.419265 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-30 00:36:16.419370 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-30 00:36:16.419387 | orchestrator |
2026-03-30 00:36:16.419402 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-30 00:36:16.419417 | orchestrator | Monday 30 March 2026 00:36:01 +0000 (0:00:01.340) 0:00:07.255 **********
2026-03-30 00:36:16.419431 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:36:16.419444 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:36:16.419457 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:36:16.419470 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:36:16.419483 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:36:16.419496 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:36:16.419509 | orchestrator |
2026-03-30 00:36:16.419522 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-30 00:36:16.419535 | orchestrator | Monday 30 March 2026 00:36:05 +0000 (0:00:04.053) 0:00:11.308 **********
2026-03-30 00:36:16.419547 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:36:16.419595 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:36:16.419610 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:36:16.419623 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:36:16.419636 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:36:16.419649 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:36:16.419663 | orchestrator |
2026-03-30 00:36:16.419677 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-30 00:36:16.419691 | orchestrator |
2026-03-30 00:36:16.419704 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-30 00:36:16.419718 | orchestrator | Monday 30 March 2026 00:36:06 +0000 (0:00:00.551) 0:00:11.859 **********
2026-03-30 00:36:16.419731 | orchestrator | changed: [testbed-manager]
2026-03-30 00:36:16.419745 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:36:16.419759 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:36:16.419772 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:36:16.419785 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:36:16.419800 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:36:16.419814 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:36:16.419829 | orchestrator |
2026-03-30 00:36:16.419843 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-30 00:36:16.419857 | orchestrator | Monday 30 March 2026 00:36:08 +0000 (0:00:01.741) 0:00:13.601 **********
2026-03-30 00:36:16.419872 | orchestrator | changed: [testbed-manager]
2026-03-30 00:36:16.419888 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:36:16.419903 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:36:16.419918 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:36:16.419933 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:36:16.419948 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:36:16.419988 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:36:16.420004 | orchestrator |
2026-03-30 00:36:16.420019 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-30 00:36:16.420034 | orchestrator | Monday 30 March 2026 00:36:09 +0000 (0:00:01.518) 0:00:15.119 **********
2026-03-30 00:36:16.420050 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:36:16.420065 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:36:16.420079 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:36:16.420094 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:36:16.420108 | orchestrator | ok: [testbed-manager]
2026-03-30 00:36:16.420123 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:36:16.420137 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:36:16.420152 | orchestrator |
2026-03-30 00:36:16.420183 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-30 00:36:16.420197 | orchestrator | Monday 30 March 2026 00:36:11 +0000 (0:00:01.633) 0:00:16.752 **********
2026-03-30 00:36:16.420211 | orchestrator | changed: [testbed-manager]
2026-03-30 00:36:16.420225 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:36:16.420240 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:36:16.420254 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:36:16.420269 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:36:16.420309 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:36:16.420323 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:36:16.420337 | orchestrator |
2026-03-30 00:36:16.420351 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-30 00:36:16.420366 | orchestrator | Monday 30 March 2026 00:36:12 +0000 (0:00:01.720) 0:00:18.473 **********
2026-03-30 00:36:16.420380 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:36:16.420404 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:36:16.420419 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:36:16.420433 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:36:16.420448 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:36:16.420462 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:36:16.420476 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:36:16.420491 | orchestrator |
2026-03-30 00:36:16.420505 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-30 00:36:16.420520 | orchestrator |
2026-03-30 00:36:16.420535 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-30 00:36:16.420549 | orchestrator | Monday 30 March 2026 00:36:13 +0000 (0:00:00.760) 0:00:19.233 **********
2026-03-30 00:36:16.420562 | orchestrator | ok: [testbed-manager]
2026-03-30 00:36:16.420576 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:36:16.420591 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:36:16.420605 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:36:16.420620 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:36:16.420634 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:36:16.420648 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:36:16.420662 | orchestrator |
2026-03-30 00:36:16.420676 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:36:16.420692 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-30 00:36:16.420709 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:16.420724 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:16.420739 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:16.420754 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:16.420768 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:16.420783 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:16.420798 | orchestrator |
2026-03-30 00:36:16.420812 | orchestrator |
2026-03-30 00:36:16.420826 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:36:16.420839 | orchestrator | Monday 30 March 2026 00:36:16 +0000 (0:00:02.757) 0:00:21.991 **********
2026-03-30 00:36:16.420853 | orchestrator | ===============================================================================
2026-03-30 00:36:16.420880 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.05s
2026-03-30 00:36:16.420894 | orchestrator | Install python3-docker -------------------------------------------------- 2.76s
2026-03-30 00:36:16.420908 | orchestrator | Apply netplan configuration --------------------------------------------- 2.67s
2026-03-30 00:36:16.420924 | orchestrator | Apply netplan configuration --------------------------------------------- 2.30s
2026-03-30 00:36:16.420939 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.74s
2026-03-30 00:36:16.420953 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s
2026-03-30 00:36:16.420967 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s
2026-03-30 00:36:16.420982 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.52s
2026-03-30 00:36:16.420998 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.34s
2026-03-30 00:36:16.421012 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.76s
2026-03-30 00:36:16.421025 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s
2026-03-30 00:36:16.421053 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.55s
2026-03-30 00:36:16.899432 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-30 00:36:28.491960 | orchestrator | 2026-03-30 00:36:28 | INFO  | Prepare task for execution of reboot.
2026-03-30 00:36:28.572836 | orchestrator | 2026-03-30 00:36:28 | INFO  | Task bdbf75cc-4d92-4fa5-b14b-68258bfd4d99 (reboot) was prepared for execution.
2026-03-30 00:36:28.572989 | orchestrator | 2026-03-30 00:36:28 | INFO  | It takes a moment until task bdbf75cc-4d92-4fa5-b14b-68258bfd4d99 (reboot) has been started and output is visible here.
2026-03-30 00:36:39.842961 | orchestrator |
2026-03-30 00:36:39.843068 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-30 00:36:39.843081 | orchestrator |
2026-03-30 00:36:39.843090 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-30 00:36:39.843099 | orchestrator | Monday 30 March 2026 00:36:31 +0000 (0:00:00.243) 0:00:00.243 **********
2026-03-30 00:36:39.843107 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:36:39.843117 | orchestrator |
2026-03-30 00:36:39.843125 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-30 00:36:39.843134 | orchestrator | Monday 30 March 2026 00:36:31 +0000 (0:00:00.127) 0:00:00.370 **********
2026-03-30 00:36:39.843142 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:36:39.843150 | orchestrator |
2026-03-30 00:36:39.843173 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-30 00:36:39.843181 | orchestrator | Monday 30 March 2026 00:36:33 +0000 (0:00:01.213) 0:00:01.584 **********
2026-03-30 00:36:39.843189 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:36:39.843198 | orchestrator |
2026-03-30 00:36:39.843205 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-30 00:36:39.843213 | orchestrator |
2026-03-30 00:36:39.843221 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-30 00:36:39.843229 | orchestrator | Monday 30 March 2026 00:36:33 +0000 (0:00:00.110) 0:00:01.695 **********
2026-03-30 00:36:39.843275 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:36:39.843283 | orchestrator |
2026-03-30 00:36:39.843291 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-30 00:36:39.843300 | orchestrator | Monday 30 March 2026 00:36:33 +0000 (0:00:00.112) 0:00:01.808 **********
2026-03-30 00:36:39.843307 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:36:39.843315 | orchestrator |
2026-03-30 00:36:39.843324 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-30 00:36:39.843332 | orchestrator | Monday 30 March 2026 00:36:34 +0000 (0:00:01.050) 0:00:02.858 **********
2026-03-30 00:36:39.843340 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:36:39.843349 | orchestrator |
2026-03-30 00:36:39.843376 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-30 00:36:39.843386 | orchestrator |
2026-03-30 00:36:39.843394 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-30 00:36:39.843402 | orchestrator | Monday 30 March 2026 00:36:34 +0000 (0:00:00.107) 0:00:02.966 **********
2026-03-30 00:36:39.843410 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:36:39.843418 | orchestrator |
2026-03-30 00:36:39.843425 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-30 00:36:39.843433 | orchestrator | Monday 30 March 2026 00:36:34 +0000 (0:00:00.100) 0:00:03.066 **********
2026-03-30 00:36:39.843441 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:36:39.843449 | orchestrator |
2026-03-30 00:36:39.843457 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-30 00:36:39.843466 | orchestrator | Monday 30 March 2026 00:36:35 +0000 (0:00:01.097) 0:00:04.164 **********
2026-03-30 00:36:39.843474 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:36:39.843482 | orchestrator |
2026-03-30 00:36:39.843490 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-30 00:36:39.843498 | orchestrator |
2026-03-30 00:36:39.843506 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-30 00:36:39.843514 | orchestrator | Monday 30 March 2026 00:36:35 +0000 (0:00:00.110) 0:00:04.280 **********
2026-03-30 00:36:39.843523 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:36:39.843532 | orchestrator |
2026-03-30 00:36:39.843541 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-30 00:36:39.843551 | orchestrator | Monday 30 March 2026 00:36:35 +0000 (0:00:00.110) 0:00:04.391 **********
2026-03-30 00:36:39.843560 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:36:39.843569 | orchestrator |
2026-03-30 00:36:39.843579 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-30 00:36:39.843588 | orchestrator | Monday 30 March 2026 00:36:36 +0000 (0:00:00.986) 0:00:05.377 **********
2026-03-30 00:36:39.843597 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:36:39.843606 | orchestrator |
2026-03-30 00:36:39.843615 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-30 00:36:39.843624 | orchestrator |
2026-03-30 00:36:39.843633 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-30 00:36:39.843642 | orchestrator | Monday 30 March 2026 00:36:36 +0000 (0:00:00.111) 0:00:05.489 **********
2026-03-30 00:36:39.843651 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:36:39.843660 | orchestrator |
2026-03-30 00:36:39.843669 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-30 00:36:39.843678 | orchestrator | Monday 30 March 2026 00:36:37 +0000 (0:00:00.227) 0:00:05.717 **********
2026-03-30 00:36:39.843687 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:36:39.843696 | orchestrator |
2026-03-30 00:36:39.843705 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-30 00:36:39.843714 | orchestrator | Monday 30 March 2026 00:36:38 +0000 (0:00:01.041) 0:00:06.759 **********
2026-03-30 00:36:39.843722 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:36:39.843732 | orchestrator |
2026-03-30 00:36:39.843759 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-30 00:36:39.843769 | orchestrator |
2026-03-30 00:36:39.843778 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-30 00:36:39.843787 | orchestrator | Monday 30 March 2026 00:36:38 +0000 (0:00:00.117) 0:00:06.876 **********
2026-03-30 00:36:39.843797 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:36:39.843806 | orchestrator |
2026-03-30 00:36:39.843815 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-30 00:36:39.843825 | orchestrator | Monday 30 March 2026 00:36:38 +0000 (0:00:00.119) 0:00:06.995 **********
2026-03-30 00:36:39.843834 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:36:39.843843 | orchestrator |
2026-03-30 00:36:39.843853 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-30 00:36:39.843869 | orchestrator | Monday 30 March 2026 00:36:39 +0000 (0:00:01.059) 0:00:08.055 **********
2026-03-30 00:36:39.843891 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:36:39.843900 | orchestrator |
2026-03-30 00:36:39.843908 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:36:39.843917 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:39.843926 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:36:39.843939 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2
rescued=0 ignored=0 2026-03-30 00:36:39.843947 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-30 00:36:39.843955 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-30 00:36:39.843963 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-30 00:36:39.843971 | orchestrator | 2026-03-30 00:36:39.843979 | orchestrator | 2026-03-30 00:36:39.843986 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:36:39.844005 | orchestrator | Monday 30 March 2026 00:36:39 +0000 (0:00:00.040) 0:00:08.096 ********** 2026-03-30 00:36:39.844014 | orchestrator | =============================================================================== 2026-03-30 00:36:39.844021 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.45s 2026-03-30 00:36:39.844030 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s 2026-03-30 00:36:39.844037 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2026-03-30 00:36:40.033322 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-30 00:36:51.423813 | orchestrator | 2026-03-30 00:36:51 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-30 00:36:51.492290 | orchestrator | 2026-03-30 00:36:51 | INFO  | Task 1f5e0199-ef40-436c-a5c6-107d0fa46122 (wait-for-connection) was prepared for execution. 2026-03-30 00:36:51.492373 | orchestrator | 2026-03-30 00:36:51 | INFO  | It takes a moment until task 1f5e0199-ef40-436c-a5c6-107d0fa46122 (wait-for-connection) has been started and output is visible here. 
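The `osism apply wait-for-connection` step above blocks until every rebooted node is reachable again. The same pattern — probe, sleep, retry with a bounded attempt count — can be sketched as a small bash helper; `wait_until` and its arguments are illustrative, not part of OSISM:

```shell
#!/usr/bin/env bash
# Hypothetical retry helper sketching what the wait-for-connection play does:
# run a probe command until it succeeds, giving up after max_attempts tries.
wait_until() {
    local max_attempts=$1 delay=$2
    shift 2
    local attempt=1
    until "$@"; do
        if (( attempt++ >= max_attempts )); then
            echo "giving up after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep "$delay"
    done
}

# Example probe: wait for SSH (TCP/22) on a node after reboot.
# wait_until 60 5 nc -z testbed-node-0 22
```

Ansible's `wait_for_connection` additionally verifies that the connection plugin can actually execute modules on the host, which is stronger than a bare TCP probe.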
2026-03-30 00:37:06.442398 | orchestrator | 2026-03-30 00:37:06.442524 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-30 00:37:06.442541 | orchestrator | 2026-03-30 00:37:06.442552 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-30 00:37:06.442563 | orchestrator | Monday 30 March 2026 00:36:54 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-03-30 00:37:06.442573 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:37:06.442584 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:37:06.442594 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:37:06.442604 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:37:06.442613 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:37:06.442624 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:37:06.442634 | orchestrator | 2026-03-30 00:37:06.442644 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:37:06.442654 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:06.442665 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:06.442702 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:06.442712 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:06.442722 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:06.442731 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:06.442741 | orchestrator | 2026-03-30 00:37:06.442750 | orchestrator | 2026-03-30 00:37:06.442760 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-30 00:37:06.442770 | orchestrator | Monday 30 March 2026 00:37:06 +0000 (0:00:11.591) 0:00:11.878 ********** 2026-03-30 00:37:06.442779 | orchestrator | =============================================================================== 2026-03-30 00:37:06.442789 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s 2026-03-30 00:37:06.681908 | orchestrator | + osism apply hddtemp 2026-03-30 00:37:18.027515 | orchestrator | 2026-03-30 00:37:18 | INFO  | Prepare task for execution of hddtemp. 2026-03-30 00:37:18.102611 | orchestrator | 2026-03-30 00:37:18 | INFO  | Task ce1954fd-2b37-432e-990a-3e2136c3ae87 (hddtemp) was prepared for execution. 2026-03-30 00:37:18.102705 | orchestrator | 2026-03-30 00:37:18 | INFO  | It takes a moment until task ce1954fd-2b37-432e-990a-3e2136c3ae87 (hddtemp) has been started and output is visible here. 2026-03-30 00:37:45.701494 | orchestrator | 2026-03-30 00:37:45.701639 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-30 00:37:45.701657 | orchestrator | 2026-03-30 00:37:45.701669 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-30 00:37:45.701682 | orchestrator | Monday 30 March 2026 00:37:21 +0000 (0:00:00.331) 0:00:00.331 ********** 2026-03-30 00:37:45.701694 | orchestrator | ok: [testbed-manager] 2026-03-30 00:37:45.701706 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:37:45.701718 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:37:45.701729 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:37:45.701741 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:37:45.701752 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:37:45.701780 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:37:45.701791 | orchestrator | 2026-03-30 00:37:45.701803 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-30 00:37:45.701814 | orchestrator | Monday 30 March 2026 00:37:21 +0000 (0:00:00.613) 0:00:00.944 ********** 2026-03-30 00:37:45.701828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:37:45.701842 | orchestrator | 2026-03-30 00:37:45.701854 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-30 00:37:45.701866 | orchestrator | Monday 30 March 2026 00:37:23 +0000 (0:00:01.107) 0:00:02.051 ********** 2026-03-30 00:37:45.701876 | orchestrator | ok: [testbed-manager] 2026-03-30 00:37:45.701887 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:37:45.701898 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:37:45.701909 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:37:45.701920 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:37:45.701930 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:37:45.701941 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:37:45.701952 | orchestrator | 2026-03-30 00:37:45.701963 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-30 00:37:45.701974 | orchestrator | Monday 30 March 2026 00:37:25 +0000 (0:00:02.605) 0:00:04.657 ********** 2026-03-30 00:37:45.701985 | orchestrator | changed: [testbed-manager] 2026-03-30 00:37:45.702082 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:37:45.702098 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:37:45.702110 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:37:45.702148 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:37:45.702206 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:37:45.702222 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:37:45.702234 | 
orchestrator | 2026-03-30 00:37:45.702247 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-30 00:37:45.702259 | orchestrator | Monday 30 March 2026 00:37:26 +0000 (0:00:00.892) 0:00:05.549 ********** 2026-03-30 00:37:45.702272 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:37:45.702284 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:37:45.702297 | orchestrator | ok: [testbed-manager] 2026-03-30 00:37:45.702309 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:37:45.702321 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:37:45.702333 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:37:45.702345 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:37:45.702357 | orchestrator | 2026-03-30 00:37:45.702370 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-30 00:37:45.702383 | orchestrator | Monday 30 March 2026 00:37:27 +0000 (0:00:01.205) 0:00:06.754 ********** 2026-03-30 00:37:45.702394 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:37:45.702404 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:37:45.702415 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:37:45.702425 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:37:45.702436 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:37:45.702447 | orchestrator | changed: [testbed-manager] 2026-03-30 00:37:45.702457 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:37:45.702468 | orchestrator | 2026-03-30 00:37:45.702479 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-30 00:37:45.702490 | orchestrator | Monday 30 March 2026 00:37:28 +0000 (0:00:00.538) 0:00:07.292 ********** 2026-03-30 00:37:45.702500 | orchestrator | changed: [testbed-manager] 2026-03-30 00:37:45.702511 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:37:45.702522 | orchestrator | changed: [testbed-node-0] 
2026-03-30 00:37:45.702532 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:37:45.702543 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:37:45.702553 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:37:45.702564 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:37:45.702575 | orchestrator | 2026-03-30 00:37:45.702586 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-30 00:37:45.702597 | orchestrator | Monday 30 March 2026 00:37:42 +0000 (0:00:13.998) 0:00:21.291 ********** 2026-03-30 00:37:45.702608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:37:45.702620 | orchestrator | 2026-03-30 00:37:45.702630 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-30 00:37:45.702641 | orchestrator | Monday 30 March 2026 00:37:43 +0000 (0:00:01.174) 0:00:22.466 ********** 2026-03-30 00:37:45.702651 | orchestrator | changed: [testbed-manager] 2026-03-30 00:37:45.702662 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:37:45.702673 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:37:45.702683 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:37:45.702694 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:37:45.702704 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:37:45.702715 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:37:45.702725 | orchestrator | 2026-03-30 00:37:45.702736 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:37:45.702747 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:37:45.702779 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:37:45.702801 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:37:45.702812 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:37:45.702830 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:37:45.702841 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:37:45.702852 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:37:45.702862 | orchestrator | 2026-03-30 00:37:45.702873 | orchestrator | 2026-03-30 00:37:45.702884 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:37:45.702894 | orchestrator | Monday 30 March 2026 00:37:45 +0000 (0:00:01.879) 0:00:24.345 ********** 2026-03-30 00:37:45.702905 | orchestrator | =============================================================================== 2026-03-30 00:37:45.702916 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.00s 2026-03-30 00:37:45.702926 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.61s 2026-03-30 00:37:45.702937 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2026-03-30 00:37:45.702947 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s 2026-03-30 00:37:45.702958 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2026-03-30 00:37:45.702968 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.11s 2026-03-30 00:37:45.702979 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.89s 2026-03-30 00:37:45.702989 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s 2026-03-30 00:37:45.703000 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.54s 2026-03-30 00:37:45.887836 | orchestrator | ++ semver latest 7.1.1 2026-03-30 00:37:45.941623 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-30 00:37:45.941714 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-30 00:37:45.941730 | orchestrator | + sudo systemctl restart manager.service 2026-03-30 00:37:59.764559 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-30 00:37:59.764638 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-30 00:37:59.764650 | orchestrator | + local max_attempts=60 2026-03-30 00:37:59.764660 | orchestrator | + local name=ceph-ansible 2026-03-30 00:37:59.764669 | orchestrator | + local attempt_num=1 2026-03-30 00:37:59.764679 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:37:59.798905 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:37:59.798987 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:37:59.799010 | orchestrator | + sleep 5 2026-03-30 00:38:04.803525 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:04.836765 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:04.836861 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:04.836876 | orchestrator | + sleep 5 2026-03-30 00:38:09.839462 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:09.871754 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:09.871859 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:09.871875 | orchestrator | + sleep 5 2026-03-30 00:38:14.875850 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:14.910650 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:14.910762 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:14.910779 | orchestrator | + sleep 5 2026-03-30 00:38:19.915427 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:19.955418 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:19.955504 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:19.955516 | orchestrator | + sleep 5 2026-03-30 00:38:24.960584 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:25.003971 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:25.004098 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:25.004116 | orchestrator | + sleep 5 2026-03-30 00:38:30.009810 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:30.048471 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:30.048568 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:30.048582 | orchestrator | + sleep 5 2026-03-30 00:38:35.052220 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:35.090879 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:35.090982 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:35.090997 | orchestrator | + sleep 5 2026-03-30 00:38:40.094965 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:40.131006 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:40.131157 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:40.131174 | orchestrator | + sleep 5 2026-03-30 00:38:45.135864 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:45.175016 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:45.175143 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:45.175160 | orchestrator | + sleep 5 2026-03-30 00:38:50.179960 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:50.220417 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:50.220526 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:50.220562 | orchestrator | + sleep 5 2026-03-30 00:38:55.225625 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:38:55.264322 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-30 00:38:55.264399 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:38:55.264407 | orchestrator | + sleep 5 2026-03-30 00:39:00.270210 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:39:00.313617 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-30 00:39:00.313702 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-30 00:39:00.313721 | orchestrator | + sleep 5 2026-03-30 00:39:05.318452 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-30 00:39:05.355593 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:39:05.355669 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-30 00:39:05.355678 | orchestrator | + local max_attempts=60 2026-03-30 00:39:05.355684 | orchestrator | + local name=kolla-ansible 2026-03-30 00:39:05.355689 | orchestrator | + local attempt_num=1 2026-03-30 00:39:05.356465 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-30 00:39:05.388576 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:39:05.388665 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-30 00:39:05.388680 | orchestrator | + local max_attempts=60 2026-03-30 00:39:05.388693 | orchestrator | + local name=osism-ansible 2026-03-30 00:39:05.388704 | orchestrator | + local attempt_num=1 2026-03-30 00:39:05.389528 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-30 00:39:05.428216 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-30 00:39:05.428308 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-30 00:39:05.428323 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-30 00:39:05.613422 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-30 00:39:05.756861 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-30 00:39:05.916415 | orchestrator | ARA in osism-ansible already disabled. 2026-03-30 00:39:06.068432 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-30 00:39:06.068966 | orchestrator | + osism apply gather-facts 2026-03-30 00:39:17.523728 | orchestrator | 2026-03-30 00:39:17 | INFO  | Prepare task for execution of gather-facts. 2026-03-30 00:39:17.592051 | orchestrator | 2026-03-30 00:39:17 | INFO  | Task 3ce7205c-f4c4-4c72-a970-b9a5acb7e548 (gather-facts) was prepared for execution. 2026-03-30 00:39:17.592182 | orchestrator | 2026-03-30 00:39:17 | INFO  | It takes a moment until task 3ce7205c-f4c4-4c72-a970-b9a5acb7e548 (gather-facts) has been started and output is visible here. 
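The health-check loop traced above polls `docker inspect` every five seconds until the container reports `healthy`. A reconstruction of `wait_for_container_healthy` as suggested by the xtrace (the actual script under /opt/configuration may differ in details, e.g. it invokes `/usr/bin/docker` by full path):

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the xtrace: poll the Docker health status of a
# named container until it is "healthy" or max_attempts checks have failed.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}

# Example, as in the trace:
# wait_for_container_healthy 60 ceph-ansible
```

Note the status progression visible in the log — `unhealthy` → `starting` → `healthy` — which is why polling, rather than a single check after restart, is required here.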
2026-03-30 00:39:28.513934 | orchestrator | 2026-03-30 00:39:28.514159 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-30 00:39:28.514179 | orchestrator | 2026-03-30 00:39:28.514191 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-30 00:39:28.514202 | orchestrator | Monday 30 March 2026 00:39:20 +0000 (0:00:00.314) 0:00:00.314 ********** 2026-03-30 00:39:28.514214 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:39:28.514226 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:39:28.514237 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:39:28.514248 | orchestrator | ok: [testbed-manager] 2026-03-30 00:39:28.514259 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:39:28.514269 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:39:28.514280 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:39:28.514291 | orchestrator | 2026-03-30 00:39:28.514303 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-30 00:39:28.514314 | orchestrator | 2026-03-30 00:39:28.514325 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-30 00:39:28.514336 | orchestrator | Monday 30 March 2026 00:39:27 +0000 (0:00:06.956) 0:00:07.270 ********** 2026-03-30 00:39:28.514347 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:39:28.514359 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:39:28.514369 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:39:28.514380 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:39:28.514391 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:39:28.514401 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:39:28.514412 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:39:28.514423 | orchestrator | 2026-03-30 00:39:28.514434 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-30 00:39:28.514445 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514457 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514469 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514481 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514494 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514506 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514518 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:39:28.514531 | orchestrator | 2026-03-30 00:39:28.514543 | orchestrator | 2026-03-30 00:39:28.514557 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:39:28.514569 | orchestrator | Monday 30 March 2026 00:39:28 +0000 (0:00:00.652) 0:00:07.923 ********** 2026-03-30 00:39:28.514582 | orchestrator | =============================================================================== 2026-03-30 00:39:28.514594 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.96s 2026-03-30 00:39:28.514607 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2026-03-30 00:39:28.716731 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-30 00:39:28.738794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-30 
00:39:28.757153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-30 00:39:28.770325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-30 00:39:28.790485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-30 00:39:28.809084 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-30 00:39:28.832457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-30 00:39:28.851915 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-30 00:39:28.875060 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-30 00:39:28.894670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-30 00:39:28.913366 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-30 00:39:28.927593 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-30 00:39:28.940663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-30 00:39:28.958962 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-30 00:39:28.977098 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-30 00:39:28.996830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-30 00:39:29.017051 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-30 00:39:29.030955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-30 00:39:29.053454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-30 00:39:29.074531 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-30 00:39:29.104172 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-30 00:39:29.122160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-30 00:39:29.148867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-30 00:39:29.166251 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-30 00:39:29.317125 | orchestrator | ok: Runtime: 0:23:31.444013
2026-03-30 00:39:29.431714 |
2026-03-30 00:39:29.431910 | TASK [Deploy services]
2026-03-30 00:39:29.964592 | orchestrator | skipping: Conditional result was False
2026-03-30 00:39:29.983403 |
2026-03-30 00:39:29.983574 | TASK [Deploy in a nutshell]
2026-03-30 00:39:30.719990 | orchestrator | + set -e
2026-03-30 00:39:30.720136 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-30 00:39:30.720146 | orchestrator | ++ export INTERACTIVE=false
2026-03-30 00:39:30.720155 | orchestrator | ++ INTERACTIVE=false
2026-03-30 00:39:30.720161 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-30 00:39:30.720166 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-30 00:39:30.720172 | orchestrator | + source /opt/manager-vars.sh
2026-03-30 00:39:30.720196 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-30 00:39:30.720216 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-30 00:39:30.720222 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-30 00:39:30.720228 | orchestrator | ++ CEPH_VERSION=reef
2026-03-30 00:39:30.720232 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-30 00:39:30.720240 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-30 00:39:30.720244 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-30 00:39:30.720253 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-30 00:39:30.720256 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-30 00:39:30.720263 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-30 00:39:30.720267 | orchestrator | ++ export ARA=false
2026-03-30 00:39:30.720271 | orchestrator | ++ ARA=false
2026-03-30 00:39:30.720275 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-30 00:39:30.720279 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-30 00:39:30.720287 | orchestrator | ++ export TEMPEST=true
2026-03-30 00:39:30.720290 | orchestrator | ++ TEMPEST=true
2026-03-30 00:39:30.720294 | orchestrator | ++ export IS_ZUUL=true
2026-03-30 00:39:30.720298 | orchestrator | ++ IS_ZUUL=true
2026-03-30 00:39:30.720302 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232
2026-03-30 00:39:30.720306 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232
2026-03-30 00:39:30.720310 | orchestrator | ++ export EXTERNAL_API=false
2026-03-30 00:39:30.720317 | orchestrator | ++ EXTERNAL_API=false
2026-03-30 00:39:30.720323 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-30 00:39:30.720329 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-30 00:39:30.720338 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-30 00:39:30.720344 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-30 00:39:30.720350 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-30 00:39:30.720356 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-30 00:39:30.720362 | orchestrator | + echo
2026-03-30 00:39:30.720540 | orchestrator |
2026-03-30 00:39:30.720556 | orchestrator | # PULL IMAGES
2026-03-30 00:39:30.720560 | orchestrator |
2026-03-30 00:39:30.720564 | orchestrator | + echo '# PULL IMAGES'
2026-03-30 00:39:30.720568 | orchestrator | + echo
2026-03-30 00:39:30.722063 | orchestrator | ++ semver latest 7.0.0
2026-03-30 00:39:30.785078 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-30 00:39:30.785184 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-30 00:39:30.785223 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-30 00:39:32.165467 | orchestrator | 2026-03-30 00:39:32 | INFO  | Trying to run play pull-images in environment custom
2026-03-30 00:39:42.245072 | orchestrator | 2026-03-30 00:39:42 | INFO  | Prepare task for execution of pull-images.
2026-03-30 00:39:42.324060 | orchestrator | 2026-03-30 00:39:42 | INFO  | Task dd89e226-1244-4c71-8e7d-f59794e343b1 (pull-images) was prepared for execution.
2026-03-30 00:39:42.324187 | orchestrator | 2026-03-30 00:39:42 | INFO  | Task dd89e226-1244-4c71-8e7d-f59794e343b1 is running in background. No more output. Check ARA for logs.
2026-03-30 00:39:43.722344 | orchestrator | 2026-03-30 00:39:43 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-30 00:39:53.816920 | orchestrator | 2026-03-30 00:39:53 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-30 00:39:53.894318 | orchestrator | 2026-03-30 00:39:53 | INFO  | Task 05aeac7d-6693-454c-b339-db52d7cc0f69 (wipe-partitions) was prepared for execution.
2026-03-30 00:39:53.894417 | orchestrator | 2026-03-30 00:39:53 | INFO  | It takes a moment until task 05aeac7d-6693-454c-b339-db52d7cc0f69 (wipe-partitions) has been started and output is visible here.
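The shell trace above installs each numbered deploy/upgrade/bootstrap script as a plain command via `sudo ln -sf SCRIPT /usr/local/bin/NAME`. A minimal sketch of that pattern, using temporary stand-in directories (not the real `/usr/local/bin` or `/opt/configuration`, so it runs without root):

```shell
# Sketch of the symlink setup: numbered deploy scripts become PATH commands.
# bindir/scripts are hypothetical stand-ins for /usr/local/bin and
# /opt/configuration/scripts/deploy.
set -e
bindir=$(mktemp -d)
scripts=$(mktemp -d)
touch "$scripts/300-openstack.sh" "$scripts/400-monitoring.sh"
for f in "$scripts"/*.sh; do
  name=$(basename "$f" .sh)        # e.g. 300-openstack
  name=${name#[0-9][0-9][0-9]-}    # drop the ordering prefix -> openstack
  ln -sf "$f" "$bindir/deploy-$name"
done
ls "$bindir"   # deploy-monitoring deploy-openstack
```

Using `ln -sf` keeps the setup idempotent: rerunning the bootstrap simply overwrites any existing link instead of failing.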
2026-03-30 00:40:05.573421 | orchestrator |
2026-03-30 00:40:05.573497 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-30 00:40:05.573505 | orchestrator |
2026-03-30 00:40:05.573510 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-30 00:40:05.573521 | orchestrator | Monday 30 March 2026 00:39:57 +0000 (0:00:00.178) 0:00:00.178 **********
2026-03-30 00:40:05.573542 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:40:05.573549 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:40:05.573553 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:40:05.573558 | orchestrator |
2026-03-30 00:40:05.573562 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-30 00:40:05.573567 | orchestrator | Monday 30 March 2026 00:39:58 +0000 (0:00:00.988) 0:00:01.166 **********
2026-03-30 00:40:05.573574 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:05.573578 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:40:05.573583 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:40:05.573588 | orchestrator |
2026-03-30 00:40:05.573592 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-30 00:40:05.573597 | orchestrator | Monday 30 March 2026 00:39:58 +0000 (0:00:00.243) 0:00:01.410 **********
2026-03-30 00:40:05.573601 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:40:05.573607 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:40:05.573611 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:40:05.573615 | orchestrator |
2026-03-30 00:40:05.573620 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-30 00:40:05.573624 | orchestrator | Monday 30 March 2026 00:39:58 +0000 (0:00:00.542) 0:00:01.952 **********
2026-03-30 00:40:05.573629 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:05.573633 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:40:05.573637 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:40:05.573642 | orchestrator |
2026-03-30 00:40:05.573646 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-30 00:40:05.573650 | orchestrator | Monday 30 March 2026 00:39:59 +0000 (0:00:00.238) 0:00:02.191 **********
2026-03-30 00:40:05.573655 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-30 00:40:05.573661 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-30 00:40:05.573666 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-30 00:40:05.573670 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-30 00:40:05.573675 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-30 00:40:05.573679 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-30 00:40:05.573683 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-30 00:40:05.573687 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-30 00:40:05.573692 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-30 00:40:05.573696 | orchestrator |
2026-03-30 00:40:05.573701 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-30 00:40:05.573705 | orchestrator | Monday 30 March 2026 00:40:00 +0000 (0:00:01.368) 0:00:03.560 **********
2026-03-30 00:40:05.573710 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-30 00:40:05.573714 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-30 00:40:05.573718 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-30 00:40:05.573723 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-30 00:40:05.573727 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-30 00:40:05.573731 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-30 00:40:05.573735 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-30 00:40:05.573740 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-30 00:40:05.573744 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-30 00:40:05.573748 | orchestrator |
2026-03-30 00:40:05.573756 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-30 00:40:05.573760 | orchestrator | Monday 30 March 2026 00:40:01 +0000 (0:00:01.355) 0:00:04.915 **********
2026-03-30 00:40:05.573765 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-30 00:40:05.573769 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-30 00:40:05.573773 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-30 00:40:05.573792 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-30 00:40:05.573809 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-30 00:40:05.573814 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-30 00:40:05.573818 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-30 00:40:05.573822 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-30 00:40:05.573826 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-30 00:40:05.573831 | orchestrator |
2026-03-30 00:40:05.573835 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-30 00:40:05.573839 | orchestrator | Monday 30 March 2026 00:40:03 +0000 (0:00:02.087) 0:00:07.003 **********
2026-03-30 00:40:05.573844 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:40:05.573848 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:40:05.573852 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:40:05.573857 | orchestrator |
2026-03-30 00:40:05.573861 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-30 00:40:05.573865 | orchestrator | Monday 30 March 2026 00:40:04 +0000 (0:00:00.602) 0:00:07.606 **********
2026-03-30 00:40:05.573870 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:40:05.573874 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:40:05.573878 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:40:05.573883 | orchestrator |
2026-03-30 00:40:05.573888 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:40:05.573893 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:05.573899 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:05.573914 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:05.573919 | orchestrator |
2026-03-30 00:40:05.573951 | orchestrator |
2026-03-30 00:40:05.573956 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:40:05.573960 | orchestrator | Monday 30 March 2026 00:40:05 +0000 (0:00:00.781) 0:00:08.387 **********
2026-03-30 00:40:05.573964 | orchestrator | ===============================================================================
2026-03-30 00:40:05.573969 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.09s
2026-03-30 00:40:05.573975 | orchestrator | Check device availability ----------------------------------------------- 1.37s
2026-03-30 00:40:05.573980 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s
2026-03-30 00:40:05.573984 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.99s
2026-03-30 00:40:05.573990 | orchestrator | Request device events from the kernel ----------------------------------- 0.78s
2026-03-30 00:40:05.573994 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-03-30 00:40:05.573999 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s
2026-03-30 00:40:05.574004 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2026-03-30 00:40:05.574009 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2026-03-30 00:40:17.013654 | orchestrator | 2026-03-30 00:40:17 | INFO  | Prepare task for execution of facts.
2026-03-30 00:40:17.090962 | orchestrator | 2026-03-30 00:40:17 | INFO  | Task aaefd622-ad79-43f4-b8bd-c09e7515a11a (facts) was prepared for execution.
2026-03-30 00:40:17.091080 | orchestrator | 2026-03-30 00:40:17 | INFO  | It takes a moment until task aaefd622-ad79-43f4-b8bd-c09e7515a11a (facts) has been started and output is visible here.
2026-03-30 00:40:28.524134 | orchestrator |
2026-03-30 00:40:28.524292 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-30 00:40:28.524320 | orchestrator |
2026-03-30 00:40:28.524366 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-30 00:40:28.524379 | orchestrator | Monday 30 March 2026 00:40:20 +0000 (0:00:00.363) 0:00:00.364 **********
2026-03-30 00:40:28.524390 | orchestrator | ok: [testbed-manager]
2026-03-30 00:40:28.524403 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:40:28.524414 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:40:28.524425 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:40:28.524436 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:40:28.524446 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:40:28.524457 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:40:28.524468 | orchestrator |
2026-03-30 00:40:28.524478 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-30 00:40:28.524489 | orchestrator | Monday 30 March 2026 00:40:21 +0000 (0:00:01.297) 0:00:01.661 **********
2026-03-30 00:40:28.524500 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:40:28.524512 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:40:28.524523 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:40:28.524533 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:40:28.524544 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:28.524558 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:40:28.524577 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:40:28.524594 | orchestrator |
2026-03-30 00:40:28.524612 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-30 00:40:28.524691 | orchestrator |
2026-03-30 00:40:28.524710 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-30 00:40:28.524724 | orchestrator | Monday 30 March 2026 00:40:22 +0000 (0:00:01.068) 0:00:02.730 **********
2026-03-30 00:40:28.524736 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:40:28.524747 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:40:28.524757 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:40:28.524768 | orchestrator | ok: [testbed-manager]
2026-03-30 00:40:28.524779 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:40:28.524790 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:40:28.524808 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:40:28.524827 | orchestrator |
2026-03-30 00:40:28.524845 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-30 00:40:28.524863 | orchestrator |
2026-03-30 00:40:28.524882 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-30 00:40:28.524927 | orchestrator | Monday 30 March 2026 00:40:27 +0000 (0:00:04.851) 0:00:07.582 **********
2026-03-30 00:40:28.524945 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:40:28.524961 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:40:28.524972 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:40:28.524983 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:40:28.524996 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:28.525015 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:40:28.525033 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:40:28.525050 | orchestrator |
2026-03-30 00:40:28.525067 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:40:28.525088 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525108 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525128 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525146 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525164 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525190 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525201 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:40:28.525212 | orchestrator |
2026-03-30 00:40:28.525223 | orchestrator |
2026-03-30 00:40:28.525234 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:40:28.525244 | orchestrator | Monday 30 March 2026 00:40:28 +0000 (0:00:00.526) 0:00:08.109 **********
2026-03-30 00:40:28.525255 | orchestrator | ===============================================================================
2026-03-30 00:40:28.525266 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s
2026-03-30 00:40:28.525277 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s
2026-03-30 00:40:28.525288 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s
2026-03-30 00:40:28.525299 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2026-03-30 00:40:30.018703 | orchestrator | 2026-03-30 00:40:30 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-30 00:40:30.085086 | orchestrator | 2026-03-30 00:40:30 | INFO  | Task 1ec7ddd6-38f0-4a47-b70c-69f26cd3bc11 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-30 00:40:30.085153 | orchestrator | 2026-03-30 00:40:30 | INFO  | It takes a moment until task 1ec7ddd6-38f0-4a47-b70c-69f26cd3bc11 (ceph-configure-lvm-volumes) has been started and output is visible here.
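The wipe-partitions play above runs three steps per OSD disk: `wipefs` to drop filesystem/partition signatures, a 32M zero overwrite with `dd`, and a udev refresh. A minimal sketch of that sequence, aimed at a scratch file instead of a real `/dev/sdX` so it is safe to execute:

```shell
# Sketch of the per-device wipe sequence from the play above.
# "$img" is a hypothetical stand-in for a data disk such as /dev/sdb.
set -e
img=$(mktemp)
truncate -s 64M "$img"
printf 'stale-partition-data' | dd of="$img" conv=notrunc status=none
wipefs --all "$img"                # drop any known signatures (no-op here)
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none
# On real hosts the play then refreshes the kernel's view of the devices:
#   udevadm control --reload-rules && udevadm trigger
cmp -s -n 1048576 "$img" /dev/zero && echo "first 32M zeroed"
```

Zeroing the first 32M in addition to `wipefs` matters because LVM and Ceph metadata can survive a signature wipe alone; the subsequent udev trigger makes the kernel re-read the now-blank devices.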
2026-03-30 00:40:41.119116 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-30 00:40:41.119198 | orchestrator | 2.16.14
2026-03-30 00:40:41.119209 | orchestrator |
2026-03-30 00:40:41.119217 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-30 00:40:41.119224 | orchestrator |
2026-03-30 00:40:41.119231 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-30 00:40:41.119238 | orchestrator | Monday 30 March 2026 00:40:34 +0000 (0:00:00.223) 0:00:00.223 **********
2026-03-30 00:40:41.119245 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-30 00:40:41.119250 | orchestrator |
2026-03-30 00:40:41.119254 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-30 00:40:41.119258 | orchestrator | Monday 30 March 2026 00:40:34 +0000 (0:00:00.202) 0:00:00.426 **********
2026-03-30 00:40:41.119263 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:40:41.119267 | orchestrator |
2026-03-30 00:40:41.119271 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119275 | orchestrator | Monday 30 March 2026 00:40:34 +0000 (0:00:00.203) 0:00:00.629 **********
2026-03-30 00:40:41.119286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-30 00:40:41.119290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-30 00:40:41.119295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-30 00:40:41.119301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-30 00:40:41.119306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-30 00:40:41.119313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-30 00:40:41.119318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-30 00:40:41.119324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-30 00:40:41.119330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-30 00:40:41.119337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-30 00:40:41.119356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-30 00:40:41.119360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-30 00:40:41.119364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-30 00:40:41.119367 | orchestrator |
2026-03-30 00:40:41.119371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119375 | orchestrator | Monday 30 March 2026 00:40:35 +0000 (0:00:00.344) 0:00:00.973 **********
2026-03-30 00:40:41.119379 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119383 | orchestrator |
2026-03-30 00:40:41.119387 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119390 | orchestrator | Monday 30 March 2026 00:40:35 +0000 (0:00:00.376) 0:00:01.349 **********
2026-03-30 00:40:41.119394 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119398 | orchestrator |
2026-03-30 00:40:41.119402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119408 | orchestrator | Monday 30 March 2026 00:40:35 +0000 (0:00:00.193) 0:00:01.543 **********
2026-03-30 00:40:41.119412 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119416 | orchestrator |
2026-03-30 00:40:41.119419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119423 | orchestrator | Monday 30 March 2026 00:40:35 +0000 (0:00:00.183) 0:00:01.726 **********
2026-03-30 00:40:41.119427 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119431 | orchestrator |
2026-03-30 00:40:41.119435 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119438 | orchestrator | Monday 30 March 2026 00:40:36 +0000 (0:00:00.148) 0:00:01.874 **********
2026-03-30 00:40:41.119442 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119446 | orchestrator |
2026-03-30 00:40:41.119450 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119454 | orchestrator | Monday 30 March 2026 00:40:36 +0000 (0:00:00.176) 0:00:02.051 **********
2026-03-30 00:40:41.119457 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119461 | orchestrator |
2026-03-30 00:40:41.119465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119469 | orchestrator | Monday 30 March 2026 00:40:36 +0000 (0:00:00.174) 0:00:02.226 **********
2026-03-30 00:40:41.119472 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119476 | orchestrator |
2026-03-30 00:40:41.119480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119484 | orchestrator | Monday 30 March 2026 00:40:36 +0000 (0:00:00.180) 0:00:02.406 **********
2026-03-30 00:40:41.119487 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119491 | orchestrator |
2026-03-30 00:40:41.119495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119499 | orchestrator | Monday 30 March 2026 00:40:36 +0000 (0:00:00.205) 0:00:02.612 **********
2026-03-30 00:40:41.119503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0)
2026-03-30 00:40:41.119508 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0)
2026-03-30 00:40:41.119512 | orchestrator |
2026-03-30 00:40:41.119515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119530 | orchestrator | Monday 30 March 2026 00:40:37 +0000 (0:00:00.385) 0:00:02.998 **********
2026-03-30 00:40:41.119534 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543)
2026-03-30 00:40:41.119538 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543)
2026-03-30 00:40:41.119542 | orchestrator |
2026-03-30 00:40:41.119548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119556 | orchestrator | Monday 30 March 2026 00:40:37 +0000 (0:00:00.369) 0:00:03.367 **********
2026-03-30 00:40:41.119560 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf)
2026-03-30 00:40:41.119563 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf)
2026-03-30 00:40:41.119567 | orchestrator |
2026-03-30 00:40:41.119571 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119575 | orchestrator | Monday 30 March 2026 00:40:38 +0000 (0:00:00.561) 0:00:03.928 **********
2026-03-30 00:40:41.119578 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f)
2026-03-30 00:40:41.119582 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f)
2026-03-30 00:40:41.119586 | orchestrator |
2026-03-30 00:40:41.119590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:40:41.119593 | orchestrator | Monday 30 March 2026 00:40:38 +0000 (0:00:00.542) 0:00:04.470 **********
2026-03-30 00:40:41.119597 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-30 00:40:41.119601 | orchestrator |
2026-03-30 00:40:41.119605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119609 | orchestrator | Monday 30 March 2026 00:40:39 +0000 (0:00:00.723) 0:00:05.194 **********
2026-03-30 00:40:41.119612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-30 00:40:41.119616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-30 00:40:41.119620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-30 00:40:41.119623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-30 00:40:41.119627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-30 00:40:41.119631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-30 00:40:41.119635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-30 00:40:41.119638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-30 00:40:41.119642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-30 00:40:41.119646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-30 00:40:41.119649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-30 00:40:41.119653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-30 00:40:41.119657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-30 00:40:41.119661 | orchestrator |
2026-03-30 00:40:41.119664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119669 | orchestrator | Monday 30 March 2026 00:40:39 +0000 (0:00:00.370) 0:00:05.564 **********
2026-03-30 00:40:41.119675 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119681 | orchestrator |
2026-03-30 00:40:41.119687 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119694 | orchestrator | Monday 30 March 2026 00:40:39 +0000 (0:00:00.200) 0:00:05.765 **********
2026-03-30 00:40:41.119700 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119707 | orchestrator |
2026-03-30 00:40:41.119713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119721 | orchestrator | Monday 30 March 2026 00:40:40 +0000 (0:00:00.198) 0:00:05.964 **********
2026-03-30 00:40:41.119728 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119738 | orchestrator |
2026-03-30 00:40:41.119745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119752 | orchestrator | Monday 30 March 2026 00:40:40 +0000 (0:00:00.194) 0:00:06.159 **********
2026-03-30 00:40:41.119760 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119764 | orchestrator |
2026-03-30 00:40:41.119768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119773 | orchestrator | Monday 30 March 2026 00:40:40 +0000 (0:00:00.194) 0:00:06.354 **********
2026-03-30 00:40:41.119777 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119781 | orchestrator |
2026-03-30 00:40:41.119786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119790 | orchestrator | Monday 30 March 2026 00:40:40 +0000 (0:00:00.199) 0:00:06.554 **********
2026-03-30 00:40:41.119794 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119798 | orchestrator |
2026-03-30 00:40:41.119803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:41.119807 | orchestrator | Monday 30 March 2026 00:40:40 +0000 (0:00:00.182) 0:00:06.736 **********
2026-03-30 00:40:41.119811 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:41.119815 | orchestrator |
2026-03-30 00:40:41.119823 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:48.741561 | orchestrator | Monday 30 March 2026 00:40:41 +0000 (0:00:00.191) 0:00:06.928 **********
2026-03-30 00:40:48.741641 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741651 | orchestrator |
2026-03-30 00:40:48.741657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:48.741663 | orchestrator | Monday 30 March 2026 00:40:41 +0000 (0:00:00.184) 0:00:07.112 **********
2026-03-30 00:40:48.741668 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-30 00:40:48.741674 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-30 00:40:48.741680 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-30 00:40:48.741685 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-30 00:40:48.741690 | orchestrator |
2026-03-30 00:40:48.741695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:48.741714 | orchestrator | Monday 30 March 2026 00:40:42 +0000 (0:00:00.976) 0:00:08.088 **********
2026-03-30 00:40:48.741720 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741725 | orchestrator |
2026-03-30 00:40:48.741730 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:48.741735 | orchestrator | Monday 30 March 2026 00:40:42 +0000 (0:00:00.193) 0:00:08.283 **********
2026-03-30 00:40:48.741740 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741745 | orchestrator |
2026-03-30 00:40:48.741750 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:48.741755 | orchestrator | Monday 30 March 2026 00:40:42 +0000 (0:00:00.202) 0:00:08.485 **********
2026-03-30 00:40:48.741760 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741765 | orchestrator |
2026-03-30 00:40:48.741770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:40:48.741775 | orchestrator | Monday 30 March 2026 00:40:42 +0000 (0:00:00.199) 0:00:08.685 **********
2026-03-30 00:40:48.741780 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741785 | orchestrator |
2026-03-30 00:40:48.741790 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-30 00:40:48.741796 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.214) 0:00:08.900 **********
2026-03-30 00:40:48.741801 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-30 00:40:48.741806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-30 00:40:48.741811 | orchestrator |
2026-03-30 00:40:48.741816 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-30 00:40:48.741821 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.171) 0:00:09.072 **********
2026-03-30 00:40:48.741842 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741847 | orchestrator |
2026-03-30 00:40:48.741864 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-30 00:40:48.741903 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.129) 0:00:09.202 **********
2026-03-30 00:40:48.741908 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741913 | orchestrator |
2026-03-30 00:40:48.741918 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-30 00:40:48.741923 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.131) 0:00:09.334 **********
2026-03-30 00:40:48.741928 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:40:48.741933 | orchestrator |
2026-03-30 00:40:48.741938 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-30 00:40:48.741943 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.138) 0:00:09.472 **********
2026-03-30 00:40:48.741948 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:40:48.741954 | orchestrator |
2026-03-30 00:40:48.741959 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-30 00:40:48.741964 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.140) 0:00:09.612 **********
2026-03-30 00:40:48.741970 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}})
2026-03-30 00:40:48.741976 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'deb01b05-78a2-5c26-94fe-c042bb294237'}})
2026-03-30 00:40:48.741981 | orchestrator |
2026-03-30 00:40:48.741986 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-30 00:40:48.741991 | orchestrator | Monday 30 March 2026 00:40:43 +0000 (0:00:00.171) 0:00:09.784 ********** 2026-03-30 00:40:48.741997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}})  2026-03-30 00:40:48.742007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'deb01b05-78a2-5c26-94fe-c042bb294237'}})  2026-03-30 00:40:48.742056 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742063 | orchestrator | 2026-03-30 00:40:48.742068 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-30 00:40:48.742073 | orchestrator | Monday 30 March 2026 00:40:44 +0000 (0:00:00.162) 0:00:09.947 ********** 2026-03-30 00:40:48.742079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}})  2026-03-30 00:40:48.742084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'deb01b05-78a2-5c26-94fe-c042bb294237'}})  2026-03-30 00:40:48.742089 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742094 | orchestrator | 2026-03-30 00:40:48.742099 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-30 00:40:48.742104 | orchestrator | Monday 30 March 2026 00:40:44 +0000 (0:00:00.348) 0:00:10.295 ********** 2026-03-30 00:40:48.742109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}})  2026-03-30 00:40:48.742126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'deb01b05-78a2-5c26-94fe-c042bb294237'}})  2026-03-30 00:40:48.742132 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742138 | 
orchestrator | 2026-03-30 00:40:48.742144 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-30 00:40:48.742150 | orchestrator | Monday 30 March 2026 00:40:44 +0000 (0:00:00.148) 0:00:10.444 ********** 2026-03-30 00:40:48.742155 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:40:48.742161 | orchestrator | 2026-03-30 00:40:48.742167 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-30 00:40:48.742173 | orchestrator | Monday 30 March 2026 00:40:44 +0000 (0:00:00.133) 0:00:10.577 ********** 2026-03-30 00:40:48.742178 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:40:48.742190 | orchestrator | 2026-03-30 00:40:48.742195 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-30 00:40:48.742201 | orchestrator | Monday 30 March 2026 00:40:44 +0000 (0:00:00.150) 0:00:10.727 ********** 2026-03-30 00:40:48.742207 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742212 | orchestrator | 2026-03-30 00:40:48.742218 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-30 00:40:48.742224 | orchestrator | Monday 30 March 2026 00:40:45 +0000 (0:00:00.133) 0:00:10.860 ********** 2026-03-30 00:40:48.742230 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742235 | orchestrator | 2026-03-30 00:40:48.742241 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-30 00:40:48.742247 | orchestrator | Monday 30 March 2026 00:40:45 +0000 (0:00:00.130) 0:00:10.991 ********** 2026-03-30 00:40:48.742252 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742258 | orchestrator | 2026-03-30 00:40:48.742263 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-30 00:40:48.742269 | orchestrator | Monday 30 March 2026 00:40:45 +0000 
(0:00:00.127) 0:00:11.118 ********** 2026-03-30 00:40:48.742275 | orchestrator | ok: [testbed-node-3] => { 2026-03-30 00:40:48.742280 | orchestrator |  "ceph_osd_devices": { 2026-03-30 00:40:48.742286 | orchestrator |  "sdb": { 2026-03-30 00:40:48.742292 | orchestrator |  "osd_lvm_uuid": "8f4fd2da-a001-5de7-aa88-1349b3eb3c17" 2026-03-30 00:40:48.742298 | orchestrator |  }, 2026-03-30 00:40:48.742304 | orchestrator |  "sdc": { 2026-03-30 00:40:48.742310 | orchestrator |  "osd_lvm_uuid": "deb01b05-78a2-5c26-94fe-c042bb294237" 2026-03-30 00:40:48.742316 | orchestrator |  } 2026-03-30 00:40:48.742321 | orchestrator |  } 2026-03-30 00:40:48.742327 | orchestrator | } 2026-03-30 00:40:48.742333 | orchestrator | 2026-03-30 00:40:48.742338 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-30 00:40:48.742344 | orchestrator | Monday 30 March 2026 00:40:45 +0000 (0:00:00.143) 0:00:11.262 ********** 2026-03-30 00:40:48.742350 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742355 | orchestrator | 2026-03-30 00:40:48.742361 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-30 00:40:48.742367 | orchestrator | Monday 30 March 2026 00:40:45 +0000 (0:00:00.128) 0:00:11.391 ********** 2026-03-30 00:40:48.742372 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742378 | orchestrator | 2026-03-30 00:40:48.742384 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-30 00:40:48.742389 | orchestrator | Monday 30 March 2026 00:40:45 +0000 (0:00:00.126) 0:00:11.518 ********** 2026-03-30 00:40:48.742395 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:40:48.742400 | orchestrator | 2026-03-30 00:40:48.742406 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-30 00:40:48.742412 | orchestrator | Monday 30 March 2026 00:40:45 +0000 
(0:00:00.136) 0:00:11.655 ********** 2026-03-30 00:40:48.742417 | orchestrator | changed: [testbed-node-3] => { 2026-03-30 00:40:48.742423 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-30 00:40:48.742429 | orchestrator |  "ceph_osd_devices": { 2026-03-30 00:40:48.742435 | orchestrator |  "sdb": { 2026-03-30 00:40:48.742440 | orchestrator |  "osd_lvm_uuid": "8f4fd2da-a001-5de7-aa88-1349b3eb3c17" 2026-03-30 00:40:48.742446 | orchestrator |  }, 2026-03-30 00:40:48.742452 | orchestrator |  "sdc": { 2026-03-30 00:40:48.742457 | orchestrator |  "osd_lvm_uuid": "deb01b05-78a2-5c26-94fe-c042bb294237" 2026-03-30 00:40:48.742463 | orchestrator |  } 2026-03-30 00:40:48.742469 | orchestrator |  }, 2026-03-30 00:40:48.742474 | orchestrator |  "lvm_volumes": [ 2026-03-30 00:40:48.742480 | orchestrator |  { 2026-03-30 00:40:48.742486 | orchestrator |  "data": "osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17", 2026-03-30 00:40:48.742491 | orchestrator |  "data_vg": "ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17" 2026-03-30 00:40:48.742500 | orchestrator |  }, 2026-03-30 00:40:48.742505 | orchestrator |  { 2026-03-30 00:40:48.742510 | orchestrator |  "data": "osd-block-deb01b05-78a2-5c26-94fe-c042bb294237", 2026-03-30 00:40:48.742515 | orchestrator |  "data_vg": "ceph-deb01b05-78a2-5c26-94fe-c042bb294237" 2026-03-30 00:40:48.742520 | orchestrator |  } 2026-03-30 00:40:48.742525 | orchestrator |  ] 2026-03-30 00:40:48.742531 | orchestrator |  } 2026-03-30 00:40:48.742536 | orchestrator | } 2026-03-30 00:40:48.742541 | orchestrator | 2026-03-30 00:40:48.742546 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-30 00:40:48.742551 | orchestrator | Monday 30 March 2026 00:40:46 +0000 (0:00:00.192) 0:00:11.848 ********** 2026-03-30 00:40:48.742556 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 00:40:48.742561 | orchestrator | 2026-03-30 00:40:48.742566 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-30 00:40:48.742571 | orchestrator | 2026-03-30 00:40:48.742576 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-30 00:40:48.742581 | orchestrator | Monday 30 March 2026 00:40:48 +0000 (0:00:02.214) 0:00:14.063 ********** 2026-03-30 00:40:48.742586 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-30 00:40:48.742591 | orchestrator | 2026-03-30 00:40:48.742596 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-30 00:40:48.742601 | orchestrator | Monday 30 March 2026 00:40:48 +0000 (0:00:00.249) 0:00:14.312 ********** 2026-03-30 00:40:48.742606 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:40:48.742611 | orchestrator | 2026-03-30 00:40:48.742619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.553946 | orchestrator | Monday 30 March 2026 00:40:48 +0000 (0:00:00.236) 0:00:14.549 ********** 2026-03-30 00:40:56.554111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-30 00:40:56.554133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-30 00:40:56.554148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-30 00:40:56.554163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-30 00:40:56.554177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-30 00:40:56.554191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-30 00:40:56.554205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-30 00:40:56.554224 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-30 00:40:56.554239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-30 00:40:56.554254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-30 00:40:56.554269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-30 00:40:56.554283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-30 00:40:56.554318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-30 00:40:56.554332 | orchestrator | 2026-03-30 00:40:56.554347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554361 | orchestrator | Monday 30 March 2026 00:40:49 +0000 (0:00:00.366) 0:00:14.915 ********** 2026-03-30 00:40:56.554375 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554390 | orchestrator | 2026-03-30 00:40:56.554404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554418 | orchestrator | Monday 30 March 2026 00:40:49 +0000 (0:00:00.195) 0:00:15.111 ********** 2026-03-30 00:40:56.554457 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554471 | orchestrator | 2026-03-30 00:40:56.554485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554500 | orchestrator | Monday 30 March 2026 00:40:49 +0000 (0:00:00.182) 0:00:15.294 ********** 2026-03-30 00:40:56.554514 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554527 | orchestrator | 2026-03-30 00:40:56.554541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554555 | 
orchestrator | Monday 30 March 2026 00:40:49 +0000 (0:00:00.194) 0:00:15.488 ********** 2026-03-30 00:40:56.554569 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554582 | orchestrator | 2026-03-30 00:40:56.554596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554610 | orchestrator | Monday 30 March 2026 00:40:49 +0000 (0:00:00.197) 0:00:15.685 ********** 2026-03-30 00:40:56.554624 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554638 | orchestrator | 2026-03-30 00:40:56.554651 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554665 | orchestrator | Monday 30 March 2026 00:40:50 +0000 (0:00:00.598) 0:00:16.283 ********** 2026-03-30 00:40:56.554679 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554692 | orchestrator | 2026-03-30 00:40:56.554704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554717 | orchestrator | Monday 30 March 2026 00:40:50 +0000 (0:00:00.186) 0:00:16.469 ********** 2026-03-30 00:40:56.554730 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554743 | orchestrator | 2026-03-30 00:40:56.554756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554769 | orchestrator | Monday 30 March 2026 00:40:50 +0000 (0:00:00.193) 0:00:16.663 ********** 2026-03-30 00:40:56.554781 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.554794 | orchestrator | 2026-03-30 00:40:56.554807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554820 | orchestrator | Monday 30 March 2026 00:40:51 +0000 (0:00:00.194) 0:00:16.857 ********** 2026-03-30 00:40:56.554833 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf) 2026-03-30 00:40:56.554847 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf) 2026-03-30 00:40:56.554900 | orchestrator | 2026-03-30 00:40:56.554915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554929 | orchestrator | Monday 30 March 2026 00:40:51 +0000 (0:00:00.409) 0:00:17.266 ********** 2026-03-30 00:40:56.554943 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a) 2026-03-30 00:40:56.554957 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a) 2026-03-30 00:40:56.554970 | orchestrator | 2026-03-30 00:40:56.554984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.554998 | orchestrator | Monday 30 March 2026 00:40:51 +0000 (0:00:00.433) 0:00:17.700 ********** 2026-03-30 00:40:56.555011 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec) 2026-03-30 00:40:56.555025 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec) 2026-03-30 00:40:56.555039 | orchestrator | 2026-03-30 00:40:56.555053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:40:56.555085 | orchestrator | Monday 30 March 2026 00:40:52 +0000 (0:00:00.443) 0:00:18.145 ********** 2026-03-30 00:40:56.555099 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a) 2026-03-30 00:40:56.555113 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a) 2026-03-30 00:40:56.555127 | orchestrator | 2026-03-30 00:40:56.555155 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-30 00:40:56.555169 | orchestrator | Monday 30 March 2026 00:40:52 +0000 (0:00:00.431) 0:00:18.576 ********** 2026-03-30 00:40:56.555182 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-30 00:40:56.555196 | orchestrator | 2026-03-30 00:40:56.555209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555223 | orchestrator | Monday 30 March 2026 00:40:53 +0000 (0:00:00.349) 0:00:18.926 ********** 2026-03-30 00:40:56.555237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-30 00:40:56.555251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-30 00:40:56.555271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-30 00:40:56.555284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-30 00:40:56.555298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-30 00:40:56.555310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-30 00:40:56.555323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-30 00:40:56.555336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-30 00:40:56.555349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-30 00:40:56.555363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-30 00:40:56.555377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-30 00:40:56.555390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-30 00:40:56.555404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-30 00:40:56.555417 | orchestrator | 2026-03-30 00:40:56.555431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555445 | orchestrator | Monday 30 March 2026 00:40:53 +0000 (0:00:00.408) 0:00:19.335 ********** 2026-03-30 00:40:56.555458 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555472 | orchestrator | 2026-03-30 00:40:56.555486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555500 | orchestrator | Monday 30 March 2026 00:40:53 +0000 (0:00:00.224) 0:00:19.560 ********** 2026-03-30 00:40:56.555513 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555527 | orchestrator | 2026-03-30 00:40:56.555541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555555 | orchestrator | Monday 30 March 2026 00:40:54 +0000 (0:00:00.720) 0:00:20.280 ********** 2026-03-30 00:40:56.555568 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555582 | orchestrator | 2026-03-30 00:40:56.555595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555609 | orchestrator | Monday 30 March 2026 00:40:54 +0000 (0:00:00.216) 0:00:20.496 ********** 2026-03-30 00:40:56.555623 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555636 | orchestrator | 2026-03-30 00:40:56.555650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555664 | orchestrator | Monday 30 March 2026 00:40:54 +0000 (0:00:00.203) 0:00:20.700 ********** 2026-03-30 00:40:56.555678 
| orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555691 | orchestrator | 2026-03-30 00:40:56.555705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555718 | orchestrator | Monday 30 March 2026 00:40:55 +0000 (0:00:00.217) 0:00:20.917 ********** 2026-03-30 00:40:56.555732 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555755 | orchestrator | 2026-03-30 00:40:56.555768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555782 | orchestrator | Monday 30 March 2026 00:40:55 +0000 (0:00:00.186) 0:00:21.104 ********** 2026-03-30 00:40:56.555796 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555809 | orchestrator | 2026-03-30 00:40:56.555823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555837 | orchestrator | Monday 30 March 2026 00:40:55 +0000 (0:00:00.219) 0:00:21.323 ********** 2026-03-30 00:40:56.555850 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:40:56.555883 | orchestrator | 2026-03-30 00:40:56.555896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.555909 | orchestrator | Monday 30 March 2026 00:40:55 +0000 (0:00:00.251) 0:00:21.574 ********** 2026-03-30 00:40:56.555923 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-30 00:40:56.555938 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-30 00:40:56.555952 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-30 00:40:56.555966 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-30 00:40:56.555979 | orchestrator | 2026-03-30 00:40:56.555993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:40:56.556007 | orchestrator | Monday 30 March 2026 00:40:56 +0000 (0:00:00.676) 0:00:22.251 
********** 2026-03-30 00:40:56.556021 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099257 | orchestrator | 2026-03-30 00:41:02.099337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:41:02.099348 | orchestrator | Monday 30 March 2026 00:40:56 +0000 (0:00:00.199) 0:00:22.450 ********** 2026-03-30 00:41:02.099354 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099362 | orchestrator | 2026-03-30 00:41:02.099367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:41:02.099373 | orchestrator | Monday 30 March 2026 00:40:56 +0000 (0:00:00.194) 0:00:22.644 ********** 2026-03-30 00:41:02.099379 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099385 | orchestrator | 2026-03-30 00:41:02.099390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:41:02.099396 | orchestrator | Monday 30 March 2026 00:40:57 +0000 (0:00:00.180) 0:00:22.825 ********** 2026-03-30 00:41:02.099401 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099407 | orchestrator | 2026-03-30 00:41:02.099412 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-30 00:41:02.099418 | orchestrator | Monday 30 March 2026 00:40:57 +0000 (0:00:00.202) 0:00:23.027 ********** 2026-03-30 00:41:02.099424 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-30 00:41:02.099430 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-30 00:41:02.099436 | orchestrator | 2026-03-30 00:41:02.099441 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-30 00:41:02.099463 | orchestrator | Monday 30 March 2026 00:40:57 +0000 (0:00:00.311) 0:00:23.339 ********** 2026-03-30 00:41:02.099468 | orchestrator | skipping: 
[testbed-node-4] 2026-03-30 00:41:02.099474 | orchestrator | 2026-03-30 00:41:02.099480 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-30 00:41:02.099485 | orchestrator | Monday 30 March 2026 00:40:57 +0000 (0:00:00.265) 0:00:23.604 ********** 2026-03-30 00:41:02.099491 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099496 | orchestrator | 2026-03-30 00:41:02.099502 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-30 00:41:02.099511 | orchestrator | Monday 30 March 2026 00:40:57 +0000 (0:00:00.107) 0:00:23.712 ********** 2026-03-30 00:41:02.099516 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099522 | orchestrator | 2026-03-30 00:41:02.099557 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-30 00:41:02.099563 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.131) 0:00:23.843 ********** 2026-03-30 00:41:02.099587 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:41:02.099594 | orchestrator | 2026-03-30 00:41:02.099600 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-30 00:41:02.099605 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.143) 0:00:23.986 ********** 2026-03-30 00:41:02.099611 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e5d1498-d7a5-5a93-a004-d1785e71aab2'}}) 2026-03-30 00:41:02.099617 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae410091-a002-50e8-b50c-29c9b1a933c3'}}) 2026-03-30 00:41:02.099622 | orchestrator | 2026-03-30 00:41:02.099628 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-30 00:41:02.099633 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.127) 0:00:24.114 ********** 2026-03-30 00:41:02.099640 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e5d1498-d7a5-5a93-a004-d1785e71aab2'}})  2026-03-30 00:41:02.099647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae410091-a002-50e8-b50c-29c9b1a933c3'}})  2026-03-30 00:41:02.099653 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099658 | orchestrator | 2026-03-30 00:41:02.099664 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-30 00:41:02.099669 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.117) 0:00:24.231 ********** 2026-03-30 00:41:02.099674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e5d1498-d7a5-5a93-a004-d1785e71aab2'}})  2026-03-30 00:41:02.099680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae410091-a002-50e8-b50c-29c9b1a933c3'}})  2026-03-30 00:41:02.099686 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099692 | orchestrator | 2026-03-30 00:41:02.099697 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-30 00:41:02.099703 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.121) 0:00:24.353 ********** 2026-03-30 00:41:02.099708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e5d1498-d7a5-5a93-a004-d1785e71aab2'}})  2026-03-30 00:41:02.099714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae410091-a002-50e8-b50c-29c9b1a933c3'}})  2026-03-30 00:41:02.099719 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:41:02.099725 | orchestrator | 2026-03-30 00:41:02.099730 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-30 00:41:02.099736 | orchestrator | Monday 30 March 2026 00:40:58 +0000 
(0:00:00.119) 0:00:24.473 **********
2026-03-30 00:41:02.099741 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:41:02.099747 | orchestrator |
2026-03-30 00:41:02.099752 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-30 00:41:02.099757 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.116) 0:00:24.589 **********
2026-03-30 00:41:02.099763 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:41:02.099768 | orchestrator |
2026-03-30 00:41:02.099773 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-30 00:41:02.099779 | orchestrator | Monday 30 March 2026 00:40:58 +0000 (0:00:00.118) 0:00:24.707 **********
2026-03-30 00:41:02.099797 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:41:02.099804 | orchestrator |
2026-03-30 00:41:02.099811 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-30 00:41:02.099817 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.121) 0:00:24.828 **********
2026-03-30 00:41:02.099824 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:41:02.099830 | orchestrator |
2026-03-30 00:41:02.099837 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-30 00:41:02.099843 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.271) 0:00:25.100 **********
2026-03-30 00:41:02.099874 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:41:02.099886 | orchestrator |
2026-03-30 00:41:02.099893 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-30 00:41:02.099899 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.104) 0:00:25.205 **********
2026-03-30 00:41:02.099906 | orchestrator | ok: [testbed-node-4] => {
2026-03-30 00:41:02.099913 | orchestrator |     "ceph_osd_devices": {
2026-03-30 00:41:02.099919 | orchestrator |         "sdb": {
2026-03-30 00:41:02.099926 | orchestrator |             "osd_lvm_uuid": "3e5d1498-d7a5-5a93-a004-d1785e71aab2"
2026-03-30 00:41:02.099933 | orchestrator |         },
2026-03-30 00:41:02.099939 | orchestrator |         "sdc": {
2026-03-30 00:41:02.099945 | orchestrator |             "osd_lvm_uuid": "ae410091-a002-50e8-b50c-29c9b1a933c3"
2026-03-30 00:41:02.099952 | orchestrator |         }
2026-03-30 00:41:02.099958 | orchestrator |     }
2026-03-30 00:41:02.099965 | orchestrator | }
2026-03-30 00:41:02.099972 | orchestrator |
2026-03-30 00:41:02.099978 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-30 00:41:02.099984 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.138) 0:00:25.344 **********
2026-03-30 00:41:02.099991 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:41:02.099997 | orchestrator |
2026-03-30 00:41:02.100003 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-30 00:41:02.100010 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.115) 0:00:25.459 **********
2026-03-30 00:41:02.100016 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:41:02.100023 | orchestrator |
2026-03-30 00:41:02.100029 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-30 00:41:02.100035 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.112) 0:00:25.572 **********
2026-03-30 00:41:02.100042 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:41:02.100048 | orchestrator |
2026-03-30 00:41:02.100054 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-30 00:41:02.100079 | orchestrator | Monday 30 March 2026 00:40:59 +0000 (0:00:00.120) 0:00:25.693 **********
2026-03-30 00:41:02.100087 | orchestrator | changed: [testbed-node-4] => {
2026-03-30 00:41:02.100093 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-30 00:41:02.100100 | orchestrator |         "ceph_osd_devices": {
2026-03-30 00:41:02.100106 | orchestrator |             "sdb": {
2026-03-30 00:41:02.100112 | orchestrator |                 "osd_lvm_uuid": "3e5d1498-d7a5-5a93-a004-d1785e71aab2"
2026-03-30 00:41:02.100119 | orchestrator |             },
2026-03-30 00:41:02.100125 | orchestrator |             "sdc": {
2026-03-30 00:41:02.100132 | orchestrator |                 "osd_lvm_uuid": "ae410091-a002-50e8-b50c-29c9b1a933c3"
2026-03-30 00:41:02.100138 | orchestrator |             }
2026-03-30 00:41:02.100144 | orchestrator |         },
2026-03-30 00:41:02.100151 | orchestrator |         "lvm_volumes": [
2026-03-30 00:41:02.100157 | orchestrator |             {
2026-03-30 00:41:02.100164 | orchestrator |                 "data": "osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2",
2026-03-30 00:41:02.100170 | orchestrator |                 "data_vg": "ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2"
2026-03-30 00:41:02.100176 | orchestrator |             },
2026-03-30 00:41:02.100183 | orchestrator |             {
2026-03-30 00:41:02.100189 | orchestrator |                 "data": "osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3",
2026-03-30 00:41:02.100195 | orchestrator |                 "data_vg": "ceph-ae410091-a002-50e8-b50c-29c9b1a933c3"
2026-03-30 00:41:02.100200 | orchestrator |             }
2026-03-30 00:41:02.100206 | orchestrator |         ]
2026-03-30 00:41:02.100211 | orchestrator |     }
2026-03-30 00:41:02.100217 | orchestrator | }
2026-03-30 00:41:02.100222 | orchestrator |
2026-03-30 00:41:02.100228 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-30 00:41:02.100233 | orchestrator | Monday 30 March 2026 00:41:00 +0000 (0:00:00.182) 0:00:25.876 **********
2026-03-30 00:41:02.100238 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-30 00:41:02.100244 | orchestrator |
2026-03-30 00:41:02.100254 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-30 00:41:02.100259 | orchestrator |
2026-03-30 00:41:02.100265 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-30 00:41:02.100270 | orchestrator | Monday 30 March 2026 00:41:01 +0000 (0:00:00.941) 0:00:26.817 **********
2026-03-30 00:41:02.100276 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-30 00:41:02.100281 | orchestrator |
2026-03-30 00:41:02.100287 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-30 00:41:02.100292 | orchestrator | Monday 30 March 2026 00:41:01 +0000 (0:00:00.349) 0:00:27.167 **********
2026-03-30 00:41:02.100297 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:41:02.100303 | orchestrator |
2026-03-30 00:41:02.100308 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:02.100314 | orchestrator | Monday 30 March 2026 00:41:01 +0000 (0:00:00.465) 0:00:27.633 **********
2026-03-30 00:41:02.100319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-30 00:41:02.100325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-30 00:41:02.100330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-30 00:41:02.100336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-30 00:41:02.100341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-30 00:41:02.100350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-30 00:41:09.808528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-30 00:41:09.808612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-30 00:41:09.808622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-30 00:41:09.808630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-30 00:41:09.808637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-30 00:41:09.808643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-30 00:41:09.808649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-30 00:41:09.808656 | orchestrator |
2026-03-30 00:41:09.808664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808671 | orchestrator | Monday 30 March 2026 00:41:02 +0000 (0:00:00.357) 0:00:27.990 **********
2026-03-30 00:41:09.808678 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808685 | orchestrator |
2026-03-30 00:41:09.808691 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808698 | orchestrator | Monday 30 March 2026 00:41:02 +0000 (0:00:00.194) 0:00:28.184 **********
2026-03-30 00:41:09.808704 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808715 | orchestrator |
2026-03-30 00:41:09.808725 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808735 | orchestrator | Monday 30 March 2026 00:41:02 +0000 (0:00:00.236) 0:00:28.421 **********
2026-03-30 00:41:09.808745 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808755 | orchestrator |
2026-03-30 00:41:09.808766 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808776 | orchestrator | Monday 30 March 2026 00:41:02 +0000 (0:00:00.178) 0:00:28.600 **********
2026-03-30 00:41:09.808786 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808797 | orchestrator |
2026-03-30 00:41:09.808806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808813 | orchestrator | Monday 30 March 2026 00:41:02 +0000 (0:00:00.176) 0:00:28.776 **********
2026-03-30 00:41:09.808838 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808884 | orchestrator |
2026-03-30 00:41:09.808890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808897 | orchestrator | Monday 30 March 2026 00:41:03 +0000 (0:00:00.180) 0:00:28.956 **********
2026-03-30 00:41:09.808903 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808909 | orchestrator |
2026-03-30 00:41:09.808915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808922 | orchestrator | Monday 30 March 2026 00:41:03 +0000 (0:00:00.212) 0:00:29.169 **********
2026-03-30 00:41:09.808928 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808934 | orchestrator |
2026-03-30 00:41:09.808940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808947 | orchestrator | Monday 30 March 2026 00:41:03 +0000 (0:00:00.172) 0:00:29.342 **********
2026-03-30 00:41:09.808953 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.808959 | orchestrator |
2026-03-30 00:41:09.808965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.808971 | orchestrator | Monday 30 March 2026 00:41:03 +0000 (0:00:00.185) 0:00:29.527 **********
2026-03-30 00:41:09.808978 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0)
2026-03-30 00:41:09.808985 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0)
2026-03-30 00:41:09.808991 | orchestrator |
2026-03-30 00:41:09.808998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.809004 | orchestrator | Monday 30 March 2026 00:41:04 +0000 (0:00:00.553) 0:00:30.081 **********
2026-03-30 00:41:09.809022 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c)
2026-03-30 00:41:09.809029 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c)
2026-03-30 00:41:09.809035 | orchestrator |
2026-03-30 00:41:09.809041 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.809047 | orchestrator | Monday 30 March 2026 00:41:05 +0000 (0:00:00.811) 0:00:30.892 **********
2026-03-30 00:41:09.809053 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74)
2026-03-30 00:41:09.809060 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74)
2026-03-30 00:41:09.809066 | orchestrator |
2026-03-30 00:41:09.809072 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.809078 | orchestrator | Monday 30 March 2026 00:41:05 +0000 (0:00:00.421) 0:00:31.313 **********
2026-03-30 00:41:09.809085 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57)
2026-03-30 00:41:09.809093 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57)
2026-03-30 00:41:09.809100 | orchestrator |
2026-03-30 00:41:09.809108 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:41:09.809115 | orchestrator | Monday 30 March 2026 00:41:05 +0000 (0:00:00.409) 0:00:31.723 **********
2026-03-30 00:41:09.809122 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-30 00:41:09.809129 | orchestrator |
2026-03-30 00:41:09.809137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809157 | orchestrator | Monday 30 March 2026 00:41:06 +0000 (0:00:00.331) 0:00:32.055 **********
2026-03-30 00:41:09.809165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-30 00:41:09.809172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-30 00:41:09.809180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-30 00:41:09.809187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-30 00:41:09.809200 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-30 00:41:09.809207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-30 00:41:09.809214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-30 00:41:09.809222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-30 00:41:09.809229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-30 00:41:09.809236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-30 00:41:09.809243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-30 00:41:09.809250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-30 00:41:09.809257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-30 00:41:09.809265 | orchestrator |
2026-03-30 00:41:09.809272 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809279 | orchestrator | Monday 30 March 2026 00:41:06 +0000 (0:00:00.316) 0:00:32.371 **********
2026-03-30 00:41:09.809286 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809293 | orchestrator |
2026-03-30 00:41:09.809300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809308 | orchestrator | Monday 30 March 2026 00:41:06 +0000 (0:00:00.179) 0:00:32.550 **********
2026-03-30 00:41:09.809315 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809379 | orchestrator |
2026-03-30 00:41:09.809387 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809394 | orchestrator | Monday 30 March 2026 00:41:06 +0000 (0:00:00.172) 0:00:32.723 **********
2026-03-30 00:41:09.809401 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809408 | orchestrator |
2026-03-30 00:41:09.809416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809423 | orchestrator | Monday 30 March 2026 00:41:07 +0000 (0:00:00.173) 0:00:32.896 **********
2026-03-30 00:41:09.809431 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809438 | orchestrator |
2026-03-30 00:41:09.809444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809451 | orchestrator | Monday 30 March 2026 00:41:07 +0000 (0:00:00.174) 0:00:33.070 **********
2026-03-30 00:41:09.809457 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809463 | orchestrator |
2026-03-30 00:41:09.809469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809475 | orchestrator | Monday 30 March 2026 00:41:07 +0000 (0:00:00.173) 0:00:33.244 **********
2026-03-30 00:41:09.809481 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809488 | orchestrator |
2026-03-30 00:41:09.809494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809500 | orchestrator | Monday 30 March 2026 00:41:07 +0000 (0:00:00.494) 0:00:33.738 **********
2026-03-30 00:41:09.809506 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809512 | orchestrator |
2026-03-30 00:41:09.809518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809524 | orchestrator | Monday 30 March 2026 00:41:08 +0000 (0:00:00.225) 0:00:33.964 **********
2026-03-30 00:41:09.809531 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809537 | orchestrator |
2026-03-30 00:41:09.809543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809549 | orchestrator | Monday 30 March 2026 00:41:08 +0000 (0:00:00.232) 0:00:34.196 **********
2026-03-30 00:41:09.809555 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-30 00:41:09.809567 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-30 00:41:09.809574 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-30 00:41:09.809580 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-30 00:41:09.809587 | orchestrator |
2026-03-30 00:41:09.809593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809599 | orchestrator | Monday 30 March 2026 00:41:09 +0000 (0:00:00.744) 0:00:34.941 **********
2026-03-30 00:41:09.809605 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809612 | orchestrator |
2026-03-30 00:41:09.809618 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809624 | orchestrator | Monday 30 March 2026 00:41:09 +0000 (0:00:00.199) 0:00:35.140 **********
2026-03-30 00:41:09.809630 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809636 | orchestrator |
2026-03-30 00:41:09.809643 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809649 | orchestrator | Monday 30 March 2026 00:41:09 +0000 (0:00:00.158) 0:00:35.299 **********
2026-03-30 00:41:09.809655 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809661 | orchestrator |
2026-03-30 00:41:09.809667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:41:09.809673 | orchestrator | Monday 30 March 2026 00:41:09 +0000 (0:00:00.160) 0:00:35.460 **********
2026-03-30 00:41:09.809680 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:09.809686 | orchestrator |
2026-03-30 00:41:09.809696 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-30 00:41:13.441494 | orchestrator | Monday 30 March 2026 00:41:09 +0000 (0:00:00.162) 0:00:35.623 **********
2026-03-30 00:41:13.441602 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-30 00:41:13.441617 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-30 00:41:13.441630 | orchestrator |
2026-03-30 00:41:13.441642 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-30 00:41:13.441653 | orchestrator | Monday 30 March 2026 00:41:09 +0000 (0:00:00.145) 0:00:35.768 **********
2026-03-30 00:41:13.441665 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:41:13.441676 | orchestrator |
2026-03-30 00:41:13.441687 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-30 00:41:13.441698 | orchestrator | Monday 30 March 2026 00:41:10 +0000 (0:00:00.111) 0:00:35.879 **********
2026-03-30 00:41:13.441730 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.441742 | orchestrator | 2026-03-30 00:41:13.441753 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-30 00:41:13.441763 | orchestrator | Monday 30 March 2026 00:41:10 +0000 (0:00:00.131) 0:00:36.011 ********** 2026-03-30 00:41:13.441774 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.441785 | orchestrator | 2026-03-30 00:41:13.441796 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-30 00:41:13.441808 | orchestrator | Monday 30 March 2026 00:41:10 +0000 (0:00:00.178) 0:00:36.190 ********** 2026-03-30 00:41:13.441819 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:41:13.441830 | orchestrator | 2026-03-30 00:41:13.441893 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-30 00:41:13.441905 | orchestrator | Monday 30 March 2026 00:41:10 +0000 (0:00:00.241) 0:00:36.431 ********** 2026-03-30 00:41:13.441916 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}}) 2026-03-30 00:41:13.441934 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5c90778-4ce0-5f2b-bfca-518c358a14f4'}}) 2026-03-30 00:41:13.441945 | orchestrator | 2026-03-30 00:41:13.441956 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-30 00:41:13.441967 | orchestrator | Monday 30 March 2026 00:41:10 +0000 (0:00:00.141) 0:00:36.573 ********** 2026-03-30 00:41:13.441978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}})  2026-03-30 00:41:13.442113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5c90778-4ce0-5f2b-bfca-518c358a14f4'}})  
2026-03-30 00:41:13.442130 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442142 | orchestrator | 2026-03-30 00:41:13.442155 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-30 00:41:13.442167 | orchestrator | Monday 30 March 2026 00:41:10 +0000 (0:00:00.133) 0:00:36.706 ********** 2026-03-30 00:41:13.442180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}})  2026-03-30 00:41:13.442192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5c90778-4ce0-5f2b-bfca-518c358a14f4'}})  2026-03-30 00:41:13.442204 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442217 | orchestrator | 2026-03-30 00:41:13.442229 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-30 00:41:13.442241 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.146) 0:00:36.853 ********** 2026-03-30 00:41:13.442254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}})  2026-03-30 00:41:13.442267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5c90778-4ce0-5f2b-bfca-518c358a14f4'}})  2026-03-30 00:41:13.442278 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442290 | orchestrator | 2026-03-30 00:41:13.442302 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-30 00:41:13.442314 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.133) 0:00:36.986 ********** 2026-03-30 00:41:13.442327 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:41:13.442339 | orchestrator | 2026-03-30 00:41:13.442351 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-30 00:41:13.442363 | 
orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.111) 0:00:37.098 ********** 2026-03-30 00:41:13.442375 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:41:13.442387 | orchestrator | 2026-03-30 00:41:13.442398 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-30 00:41:13.442409 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.105) 0:00:37.203 ********** 2026-03-30 00:41:13.442419 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442430 | orchestrator | 2026-03-30 00:41:13.442441 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-30 00:41:13.442452 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.095) 0:00:37.299 ********** 2026-03-30 00:41:13.442463 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442473 | orchestrator | 2026-03-30 00:41:13.442484 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-30 00:41:13.442495 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.122) 0:00:37.421 ********** 2026-03-30 00:41:13.442506 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442517 | orchestrator | 2026-03-30 00:41:13.442528 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-30 00:41:13.442539 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.115) 0:00:37.537 ********** 2026-03-30 00:41:13.442550 | orchestrator | ok: [testbed-node-5] => { 2026-03-30 00:41:13.442561 | orchestrator |  "ceph_osd_devices": { 2026-03-30 00:41:13.442572 | orchestrator |  "sdb": { 2026-03-30 00:41:13.442605 | orchestrator |  "osd_lvm_uuid": "6dc98b08-79a1-56b1-a9a0-4cf05631fa6f" 2026-03-30 00:41:13.442618 | orchestrator |  }, 2026-03-30 00:41:13.442629 | orchestrator |  "sdc": { 2026-03-30 00:41:13.442641 | orchestrator |  "osd_lvm_uuid": 
"b5c90778-4ce0-5f2b-bfca-518c358a14f4" 2026-03-30 00:41:13.442651 | orchestrator |  } 2026-03-30 00:41:13.442662 | orchestrator |  } 2026-03-30 00:41:13.442673 | orchestrator | } 2026-03-30 00:41:13.442684 | orchestrator | 2026-03-30 00:41:13.442704 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-30 00:41:13.442715 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.124) 0:00:37.662 ********** 2026-03-30 00:41:13.442726 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442737 | orchestrator | 2026-03-30 00:41:13.442748 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-30 00:41:13.442759 | orchestrator | Monday 30 March 2026 00:41:11 +0000 (0:00:00.110) 0:00:37.772 ********** 2026-03-30 00:41:13.442769 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442780 | orchestrator | 2026-03-30 00:41:13.442791 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-30 00:41:13.442802 | orchestrator | Monday 30 March 2026 00:41:12 +0000 (0:00:00.247) 0:00:38.020 ********** 2026-03-30 00:41:13.442813 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:41:13.442823 | orchestrator | 2026-03-30 00:41:13.442875 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-30 00:41:13.442887 | orchestrator | Monday 30 March 2026 00:41:12 +0000 (0:00:00.109) 0:00:38.129 ********** 2026-03-30 00:41:13.442898 | orchestrator | changed: [testbed-node-5] => { 2026-03-30 00:41:13.442909 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-30 00:41:13.442920 | orchestrator |  "ceph_osd_devices": { 2026-03-30 00:41:13.442931 | orchestrator |  "sdb": { 2026-03-30 00:41:13.442942 | orchestrator |  "osd_lvm_uuid": "6dc98b08-79a1-56b1-a9a0-4cf05631fa6f" 2026-03-30 00:41:13.442953 | orchestrator |  }, 2026-03-30 00:41:13.442964 | 
orchestrator |  "sdc": { 2026-03-30 00:41:13.442976 | orchestrator |  "osd_lvm_uuid": "b5c90778-4ce0-5f2b-bfca-518c358a14f4" 2026-03-30 00:41:13.442987 | orchestrator |  } 2026-03-30 00:41:13.442998 | orchestrator |  }, 2026-03-30 00:41:13.443009 | orchestrator |  "lvm_volumes": [ 2026-03-30 00:41:13.443020 | orchestrator |  { 2026-03-30 00:41:13.443031 | orchestrator |  "data": "osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f", 2026-03-30 00:41:13.443042 | orchestrator |  "data_vg": "ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f" 2026-03-30 00:41:13.443053 | orchestrator |  }, 2026-03-30 00:41:13.443068 | orchestrator |  { 2026-03-30 00:41:13.443079 | orchestrator |  "data": "osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4", 2026-03-30 00:41:13.443164 | orchestrator |  "data_vg": "ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4" 2026-03-30 00:41:13.443178 | orchestrator |  } 2026-03-30 00:41:13.443189 | orchestrator |  ] 2026-03-30 00:41:13.443200 | orchestrator |  } 2026-03-30 00:41:13.443212 | orchestrator | } 2026-03-30 00:41:13.443223 | orchestrator | 2026-03-30 00:41:13.443234 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-30 00:41:13.443245 | orchestrator | Monday 30 March 2026 00:41:12 +0000 (0:00:00.174) 0:00:38.303 ********** 2026-03-30 00:41:13.443256 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-30 00:41:13.443267 | orchestrator | 2026-03-30 00:41:13.443278 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:41:13.443289 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 00:41:13.443302 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 00:41:13.443313 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 
00:41:13.443324 | orchestrator | 2026-03-30 00:41:13.443335 | orchestrator | 2026-03-30 00:41:13.443346 | orchestrator | 2026-03-30 00:41:13.443357 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:41:13.443367 | orchestrator | Monday 30 March 2026 00:41:13 +0000 (0:00:00.938) 0:00:39.242 ********** 2026-03-30 00:41:13.443388 | orchestrator | =============================================================================== 2026-03-30 00:41:13.443399 | orchestrator | Write configuration file ------------------------------------------------ 4.09s 2026-03-30 00:41:13.443410 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s 2026-03-30 00:41:13.443429 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s 2026-03-30 00:41:13.443441 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s 2026-03-30 00:41:13.443451 | orchestrator | Get initial list of available block devices ----------------------------- 0.91s 2026-03-30 00:41:13.443462 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2026-03-30 00:41:13.443473 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s 2026-03-30 00:41:13.443484 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-03-30 00:41:13.443495 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-30 00:41:13.443506 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-03-30 00:41:13.443517 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-03-30 00:41:13.443528 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.63s 2026-03-30 
00:41:13.443539 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.62s 2026-03-30 00:41:13.443560 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-03-30 00:41:13.643308 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2026-03-30 00:41:13.643412 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-03-30 00:41:13.643427 | orchestrator | Print configuration data ------------------------------------------------ 0.55s 2026-03-30 00:41:13.643439 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-03-30 00:41:13.643450 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.53s 2026-03-30 00:41:13.643462 | orchestrator | Set WAL devices config data --------------------------------------------- 0.52s 2026-03-30 00:41:35.149858 | orchestrator | 2026-03-30 00:41:35 | INFO  | Task 84a293ae-e8e6-4014-9f30-e8b2c121089b (sync inventory) is running in background. Output coming soon. 
2026-03-30 00:42:05.406193 | orchestrator | 2026-03-30 00:41:36 | INFO  | Starting group_vars file reorganization
2026-03-30 00:42:05.406303 | orchestrator | 2026-03-30 00:41:36 | INFO  | Moved 0 file(s) to their respective directories
2026-03-30 00:42:05.406320 | orchestrator | 2026-03-30 00:41:36 | INFO  | Group_vars file reorganization completed
2026-03-30 00:42:05.406332 | orchestrator | 2026-03-30 00:41:39 | INFO  | Starting variable preparation from inventory
2026-03-30 00:42:05.406344 | orchestrator | 2026-03-30 00:41:42 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-30 00:42:05.406356 | orchestrator | 2026-03-30 00:41:42 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-30 00:42:05.406385 | orchestrator | 2026-03-30 00:41:42 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-30 00:42:05.406397 | orchestrator | 2026-03-30 00:41:42 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-30 00:42:05.406408 | orchestrator | 2026-03-30 00:41:42 | INFO  | Variable preparation completed
2026-03-30 00:42:05.406420 | orchestrator | 2026-03-30 00:41:43 | INFO  | Starting inventory overwrite handling
2026-03-30 00:42:05.406431 | orchestrator | 2026-03-30 00:41:43 | INFO  | Handling group overwrites in 99-overwrite
2026-03-30 00:42:05.406442 | orchestrator | 2026-03-30 00:41:43 | INFO  | Removing group frr:children from 60-generic
2026-03-30 00:42:05.406479 | orchestrator | 2026-03-30 00:41:43 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-30 00:42:05.406490 | orchestrator | 2026-03-30 00:41:43 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-30 00:42:05.406502 | orchestrator | 2026-03-30 00:41:43 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-30 00:42:05.406513 | orchestrator | 2026-03-30 00:41:43 | INFO  | Handling group overwrites in 20-roles
2026-03-30 00:42:05.406523 | orchestrator | 2026-03-30 00:41:43 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-30 00:42:05.406534 | orchestrator | 2026-03-30 00:41:43 | INFO  | Removed 5 group(s) in total
2026-03-30 00:42:05.406545 | orchestrator | 2026-03-30 00:41:43 | INFO  | Inventory overwrite handling completed
2026-03-30 00:42:05.406556 | orchestrator | 2026-03-30 00:41:44 | INFO  | Starting merge of inventory files
2026-03-30 00:42:05.406566 | orchestrator | 2026-03-30 00:41:44 | INFO  | Inventory files merged successfully
2026-03-30 00:42:05.406577 | orchestrator | 2026-03-30 00:41:49 | INFO  | Generating minified hosts file
2026-03-30 00:42:05.406588 | orchestrator | 2026-03-30 00:41:50 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-30 00:42:05.406600 | orchestrator | 2026-03-30 00:41:50 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-30 00:42:05.406611 | orchestrator | 2026-03-30 00:41:52 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-30 00:42:05.406622 | orchestrator | 2026-03-30 00:42:04 | INFO  | Successfully wrote ClusterShell configuration
2026-03-30 00:42:05.406633 | orchestrator | [master 9c0520f] 2026-03-30-00-42
2026-03-30 00:42:05.406645 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-30 00:42:05.406657 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-30 00:42:05.406668 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-30 00:42:05.406679 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-30 00:42:06.605253 | orchestrator | 2026-03-30 00:42:06 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-30 00:42:06.650516 | orchestrator | 2026-03-30 00:42:06 | INFO  | Task 5131ac95-7787-4d77-9e48-535a0de184c7 (ceph-create-lvm-devices) was prepared for execution.
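The "Removing group … from …" messages above show the overwrite pass: any group that a higher-priority inventory layer (such as 99-overwrite) defines is dropped from the lower-priority layers before the files are merged. A minimal Python sketch of that idea, with illustrative layer data; the function and the set-based representation are assumptions, not the actual osism implementation:

```python
# Sketch (assumption, not the actual osism code): drop every group that the
# overwrite layer defines from all other inventory layers, counting removals.
def handle_overwrites(layers, overwrite_layer="99-overwrite"):
    removed = 0
    for name, groups in layers.items():
        if name == overwrite_layer:
            continue
        # iterate over a snapshot so we can mutate the set safely
        for group in sorted(groups & layers[overwrite_layer]):
            groups.discard(group)  # e.g. "Removing group frr:children from 60-generic"
            removed += 1
    return removed

# Illustrative layers loosely based on the log messages above.
layers = {
    "99-overwrite": {"frr:children", "netbird:children"},
    "60-generic": {"frr:children", "common"},
    "50-infrastructure": {"netbird:children", "k3s_node"},
}
```

Calling `handle_overwrites(layers)` on this toy data removes `frr:children` from 60-generic and `netbird:children` from 50-infrastructure while leaving unrelated groups untouched.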
2026-03-30 00:42:06.650599 | orchestrator | 2026-03-30 00:42:06 | INFO  | It takes a moment until task 5131ac95-7787-4d77-9e48-535a0de184c7 (ceph-create-lvm-devices) has been started and output is visible here. 2026-03-30 00:42:16.779145 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-30 00:42:16.779290 | orchestrator | 2.16.14 2026-03-30 00:42:16.779323 | orchestrator | 2026-03-30 00:42:16.779338 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-30 00:42:16.779350 | orchestrator | 2026-03-30 00:42:16.779365 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-30 00:42:16.779385 | orchestrator | Monday 30 March 2026 00:42:10 +0000 (0:00:00.243) 0:00:00.243 ********** 2026-03-30 00:42:16.779404 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 00:42:16.779422 | orchestrator | 2026-03-30 00:42:16.779440 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-30 00:42:16.779459 | orchestrator | Monday 30 March 2026 00:42:10 +0000 (0:00:00.235) 0:00:00.479 ********** 2026-03-30 00:42:16.779477 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:42:16.779496 | orchestrator | 2026-03-30 00:42:16.779514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.779533 | orchestrator | Monday 30 March 2026 00:42:10 +0000 (0:00:00.291) 0:00:00.771 ********** 2026-03-30 00:42:16.779587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-30 00:42:16.779608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-30 00:42:16.779628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-30 00:42:16.779648 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-30 00:42:16.779667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-30 00:42:16.779685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-30 00:42:16.779697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-30 00:42:16.779709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-30 00:42:16.779722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-30 00:42:16.779734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-30 00:42:16.779746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-30 00:42:16.779803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-30 00:42:16.779818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-30 00:42:16.779830 | orchestrator | 2026-03-30 00:42:16.779843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.779855 | orchestrator | Monday 30 March 2026 00:42:11 +0000 (0:00:00.379) 0:00:01.151 ********** 2026-03-30 00:42:16.779868 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.779881 | orchestrator | 2026-03-30 00:42:16.779894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.779906 | orchestrator | Monday 30 March 2026 00:42:11 +0000 (0:00:00.373) 0:00:01.524 ********** 2026-03-30 00:42:16.779919 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.779932 | orchestrator | 2026-03-30 00:42:16.779944 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.779956 | orchestrator | Monday 30 March 2026 00:42:11 +0000 (0:00:00.174) 0:00:01.698 ********** 2026-03-30 00:42:16.779987 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780000 | orchestrator | 2026-03-30 00:42:16.780012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780023 | orchestrator | Monday 30 March 2026 00:42:11 +0000 (0:00:00.181) 0:00:01.880 ********** 2026-03-30 00:42:16.780034 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780045 | orchestrator | 2026-03-30 00:42:16.780055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780066 | orchestrator | Monday 30 March 2026 00:42:11 +0000 (0:00:00.192) 0:00:02.072 ********** 2026-03-30 00:42:16.780077 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780087 | orchestrator | 2026-03-30 00:42:16.780098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780109 | orchestrator | Monday 30 March 2026 00:42:12 +0000 (0:00:00.156) 0:00:02.229 ********** 2026-03-30 00:42:16.780119 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780130 | orchestrator | 2026-03-30 00:42:16.780141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780152 | orchestrator | Monday 30 March 2026 00:42:12 +0000 (0:00:00.184) 0:00:02.414 ********** 2026-03-30 00:42:16.780163 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780173 | orchestrator | 2026-03-30 00:42:16.780184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780195 | orchestrator | Monday 30 March 2026 00:42:12 +0000 (0:00:00.190) 0:00:02.604 ********** 
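The repeated "Add known links" tasks above accumulate, per kernel device (sda, sdb, …), the /dev/disk/by-id aliases (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) that resolve to it. A rough, self-contained sketch of that resolution; the function name and directory parameter are illustrative assumptions, not the playbook's actual code:

```python
import os

# Sketch (assumption): map kernel device names to the /dev/disk/by-id
# symlinks that point at them, similar to what the "Add known links"
# tasks collect for each block device.
def collect_device_links(by_id_dir="/dev/disk/by-id"):
    links = {}
    for name in sorted(os.listdir(by_id_dir)):
        # realpath follows the symlink back to the kernel device node
        target = os.path.realpath(os.path.join(by_id_dir, name))
        links.setdefault(os.path.basename(target), []).append(name)
    return links
```

On a node like testbed-node-3 this would yield entries such as `{"sdb": ["scsi-0QEMU_QEMU_HARDDISK_…", "scsi-SQEMU_QEMU_HARDDISK_…"]}`, matching the pairs of `ok:` items in the log.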
2026-03-30 00:42:16.780206 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780228 | orchestrator | 2026-03-30 00:42:16.780239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780250 | orchestrator | Monday 30 March 2026 00:42:12 +0000 (0:00:00.172) 0:00:02.776 ********** 2026-03-30 00:42:16.780260 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0) 2026-03-30 00:42:16.780273 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0) 2026-03-30 00:42:16.780283 | orchestrator | 2026-03-30 00:42:16.780294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780327 | orchestrator | Monday 30 March 2026 00:42:13 +0000 (0:00:00.374) 0:00:03.151 ********** 2026-03-30 00:42:16.780339 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543) 2026-03-30 00:42:16.780350 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543) 2026-03-30 00:42:16.780361 | orchestrator | 2026-03-30 00:42:16.780371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780382 | orchestrator | Monday 30 March 2026 00:42:13 +0000 (0:00:00.364) 0:00:03.515 ********** 2026-03-30 00:42:16.780392 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf) 2026-03-30 00:42:16.780403 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf) 2026-03-30 00:42:16.780414 | orchestrator | 2026-03-30 00:42:16.780425 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780435 | orchestrator | Monday 30 March 2026 00:42:13 +0000 
(0:00:00.551) 0:00:04.067 ********** 2026-03-30 00:42:16.780446 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f) 2026-03-30 00:42:16.780457 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f) 2026-03-30 00:42:16.780467 | orchestrator | 2026-03-30 00:42:16.780478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:42:16.780489 | orchestrator | Monday 30 March 2026 00:42:14 +0000 (0:00:00.561) 0:00:04.629 ********** 2026-03-30 00:42:16.780499 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-30 00:42:16.780510 | orchestrator | 2026-03-30 00:42:16.780521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780537 | orchestrator | Monday 30 March 2026 00:42:15 +0000 (0:00:00.562) 0:00:05.191 ********** 2026-03-30 00:42:16.780548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-30 00:42:16.780558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-30 00:42:16.780569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-30 00:42:16.780580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-30 00:42:16.780590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-30 00:42:16.780601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-30 00:42:16.780612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-30 00:42:16.780622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-03-30 00:42:16.780633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-30 00:42:16.780644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-30 00:42:16.780654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-30 00:42:16.780665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-30 00:42:16.780682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-30 00:42:16.780693 | orchestrator | 2026-03-30 00:42:16.780704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780715 | orchestrator | Monday 30 March 2026 00:42:15 +0000 (0:00:00.347) 0:00:05.539 ********** 2026-03-30 00:42:16.780725 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780736 | orchestrator | 2026-03-30 00:42:16.780747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780793 | orchestrator | Monday 30 March 2026 00:42:15 +0000 (0:00:00.207) 0:00:05.746 ********** 2026-03-30 00:42:16.780807 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780818 | orchestrator | 2026-03-30 00:42:16.780828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780839 | orchestrator | Monday 30 March 2026 00:42:15 +0000 (0:00:00.208) 0:00:05.955 ********** 2026-03-30 00:42:16.780850 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780861 | orchestrator | 2026-03-30 00:42:16.780871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780882 | orchestrator | Monday 30 March 2026 00:42:16 +0000 
(0:00:00.165) 0:00:06.121 ********** 2026-03-30 00:42:16.780892 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780903 | orchestrator | 2026-03-30 00:42:16.780914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780924 | orchestrator | Monday 30 March 2026 00:42:16 +0000 (0:00:00.197) 0:00:06.318 ********** 2026-03-30 00:42:16.780935 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780946 | orchestrator | 2026-03-30 00:42:16.780956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.780967 | orchestrator | Monday 30 March 2026 00:42:16 +0000 (0:00:00.184) 0:00:06.503 ********** 2026-03-30 00:42:16.780978 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.780989 | orchestrator | 2026-03-30 00:42:16.780999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:16.781010 | orchestrator | Monday 30 March 2026 00:42:16 +0000 (0:00:00.182) 0:00:06.685 ********** 2026-03-30 00:42:16.781021 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:16.781032 | orchestrator | 2026-03-30 00:42:16.781048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:25.081437 | orchestrator | Monday 30 March 2026 00:42:16 +0000 (0:00:00.195) 0:00:06.881 ********** 2026-03-30 00:42:25.081525 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081536 | orchestrator | 2026-03-30 00:42:25.081544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:25.081551 | orchestrator | Monday 30 March 2026 00:42:17 +0000 (0:00:00.225) 0:00:07.107 ********** 2026-03-30 00:42:25.081558 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-30 00:42:25.081566 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-30 
00:42:25.081573 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-30 00:42:25.081580 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-30 00:42:25.081586 | orchestrator | 2026-03-30 00:42:25.081593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:25.081600 | orchestrator | Monday 30 March 2026 00:42:18 +0000 (0:00:01.188) 0:00:08.296 ********** 2026-03-30 00:42:25.081607 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081613 | orchestrator | 2026-03-30 00:42:25.081620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:25.081627 | orchestrator | Monday 30 March 2026 00:42:18 +0000 (0:00:00.208) 0:00:08.504 ********** 2026-03-30 00:42:25.081633 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081640 | orchestrator | 2026-03-30 00:42:25.081646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:25.081672 | orchestrator | Monday 30 March 2026 00:42:18 +0000 (0:00:00.220) 0:00:08.725 ********** 2026-03-30 00:42:25.081679 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081685 | orchestrator | 2026-03-30 00:42:25.081692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:42:25.081698 | orchestrator | Monday 30 March 2026 00:42:18 +0000 (0:00:00.213) 0:00:08.939 ********** 2026-03-30 00:42:25.081705 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081711 | orchestrator | 2026-03-30 00:42:25.081718 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-30 00:42:25.081725 | orchestrator | Monday 30 March 2026 00:42:19 +0000 (0:00:00.225) 0:00:09.165 ********** 2026-03-30 00:42:25.081731 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081738 | orchestrator | 2026-03-30 
00:42:25.081744 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-30 00:42:25.081772 | orchestrator | Monday 30 March 2026 00:42:19 +0000 (0:00:00.158) 0:00:09.323 ********** 2026-03-30 00:42:25.081781 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}}) 2026-03-30 00:42:25.081788 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'deb01b05-78a2-5c26-94fe-c042bb294237'}}) 2026-03-30 00:42:25.081795 | orchestrator | 2026-03-30 00:42:25.081802 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-30 00:42:25.081809 | orchestrator | Monday 30 March 2026 00:42:19 +0000 (0:00:00.205) 0:00:09.529 ********** 2026-03-30 00:42:25.081816 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}) 2026-03-30 00:42:25.081824 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'}) 2026-03-30 00:42:25.081831 | orchestrator | 2026-03-30 00:42:25.081838 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-30 00:42:25.081845 | orchestrator | Monday 30 March 2026 00:42:21 +0000 (0:00:02.175) 0:00:11.704 ********** 2026-03-30 00:42:25.081851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.081875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.081882 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081889 
| orchestrator | 2026-03-30 00:42:25.081896 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-30 00:42:25.081902 | orchestrator | Monday 30 March 2026 00:42:21 +0000 (0:00:00.180) 0:00:11.885 ********** 2026-03-30 00:42:25.081909 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}) 2026-03-30 00:42:25.081916 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'}) 2026-03-30 00:42:25.081922 | orchestrator | 2026-03-30 00:42:25.081929 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-30 00:42:25.081936 | orchestrator | Monday 30 March 2026 00:42:23 +0000 (0:00:01.471) 0:00:13.357 ********** 2026-03-30 00:42:25.081942 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.081949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.081956 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.081963 | orchestrator | 2026-03-30 00:42:25.081969 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-30 00:42:25.081983 | orchestrator | Monday 30 March 2026 00:42:23 +0000 (0:00:00.151) 0:00:13.509 ********** 2026-03-30 00:42:25.082004 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082011 | orchestrator | 2026-03-30 00:42:25.082057 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-30 00:42:25.082064 | orchestrator | Monday 30 March 2026 00:42:23 
+0000 (0:00:00.134) 0:00:13.644 ********** 2026-03-30 00:42:25.082071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.082078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.082085 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082091 | orchestrator | 2026-03-30 00:42:25.082098 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-30 00:42:25.082105 | orchestrator | Monday 30 March 2026 00:42:23 +0000 (0:00:00.280) 0:00:13.924 ********** 2026-03-30 00:42:25.082112 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082118 | orchestrator | 2026-03-30 00:42:25.082125 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-30 00:42:25.082132 | orchestrator | Monday 30 March 2026 00:42:23 +0000 (0:00:00.129) 0:00:14.054 ********** 2026-03-30 00:42:25.082138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.082145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.082152 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082158 | orchestrator | 2026-03-30 00:42:25.082170 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-30 00:42:25.082176 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.137) 0:00:14.192 ********** 2026-03-30 00:42:25.082183 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082190 
| orchestrator | 2026-03-30 00:42:25.082196 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-30 00:42:25.082203 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.131) 0:00:14.324 ********** 2026-03-30 00:42:25.082210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.082216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.082223 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082230 | orchestrator | 2026-03-30 00:42:25.082236 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-30 00:42:25.082243 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.153) 0:00:14.478 ********** 2026-03-30 00:42:25.082250 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:42:25.082256 | orchestrator | 2026-03-30 00:42:25.082263 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-30 00:42:25.082270 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.130) 0:00:14.608 ********** 2026-03-30 00:42:25.082281 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.082291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.082302 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082313 | orchestrator | 2026-03-30 00:42:25.082324 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 
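The play above iterates over `lvm_volumes` items of the form `{'data': 'osd-block-<uuid>', 'data_vg': 'ceph-<uuid>'}` derived from the per-device `osd_lvm_uuid` values, and the OSD-count checks tally how many OSDs each DB/WAL VG would carry (all skipped here because no `db_vg`/`wal_vg` entries exist). A hedged sketch of both steps, using the UUIDs visible in the log; this reconstructs the data flow, not the actual playbook code:

```python
from collections import Counter

# Sketch (assumption): derive the lvm_volumes entries seen in the log from
# the ceph_osd_devices mapping, then count OSDs wanted per DB VG.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8f4fd2da-a001-5de7-aa88-1349b3eb3c17"},
    "sdc": {"osd_lvm_uuid": "deb01b05-78a2-5c26-94fe-c042bb294237"},
}

lvm_volumes = [
    {"data": f"osd-block-{v['osd_lvm_uuid']}",
     "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
    for v in ceph_osd_devices.values()
]

# With no db_vg keys present, the per-DB-VG count is empty, matching the
# "_num_osds_wanted_per_db_vg": {} output in the log.
num_osds_wanted_per_db_vg = dict(
    Counter(v["db_vg"] for v in lvm_volumes if "db_vg" in v)
)
```

The same counting pattern applies to `wal_vg` for the WAL and DB+WAL variants of the check.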
2026-03-30 00:42:25.082343 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.152) 0:00:14.761 ********** 2026-03-30 00:42:25.082350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.082357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.082363 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082370 | orchestrator | 2026-03-30 00:42:25.082377 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-30 00:42:25.082383 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.149) 0:00:14.910 ********** 2026-03-30 00:42:25.082390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})  2026-03-30 00:42:25.082397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})  2026-03-30 00:42:25.082403 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082410 | orchestrator | 2026-03-30 00:42:25.082416 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-30 00:42:25.082423 | orchestrator | Monday 30 March 2026 00:42:24 +0000 (0:00:00.145) 0:00:15.056 ********** 2026-03-30 00:42:25.082430 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:42:25.082437 | orchestrator | 2026-03-30 00:42:25.082443 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-30 00:42:25.082455 | orchestrator | Monday 30 March 2026 00:42:25 +0000 (0:00:00.127) 0:00:15.184 ********** 2026-03-30 
00:42:30.737427 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.737524 | orchestrator |
2026-03-30 00:42:30.737538 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-30 00:42:30.737549 | orchestrator | Monday 30 March 2026  00:42:25 +0000 (0:00:00.125)       0:00:15.309 **********
2026-03-30 00:42:30.737558 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.737568 | orchestrator |
2026-03-30 00:42:30.737577 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-30 00:42:30.737585 | orchestrator | Monday 30 March 2026  00:42:25 +0000 (0:00:00.154)       0:00:15.464 **********
2026-03-30 00:42:30.737594 | orchestrator | ok: [testbed-node-3] => {
2026-03-30 00:42:30.737605 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-30 00:42:30.737615 | orchestrator | }
2026-03-30 00:42:30.737625 | orchestrator |
2026-03-30 00:42:30.737634 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-30 00:42:30.737644 | orchestrator | Monday 30 March 2026  00:42:25 +0000 (0:00:00.255)       0:00:15.720 **********
2026-03-30 00:42:30.737653 | orchestrator | ok: [testbed-node-3] => {
2026-03-30 00:42:30.737663 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-30 00:42:30.737671 | orchestrator | }
2026-03-30 00:42:30.737681 | orchestrator |
2026-03-30 00:42:30.737690 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-30 00:42:30.737699 | orchestrator | Monday 30 March 2026  00:42:25 +0000 (0:00:00.133)       0:00:15.853 **********
2026-03-30 00:42:30.737708 | orchestrator | ok: [testbed-node-3] => {
2026-03-30 00:42:30.737718 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-30 00:42:30.737727 | orchestrator | }
2026-03-30 00:42:30.737735 | orchestrator |
2026-03-30 00:42:30.737777 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-30 00:42:30.737787 | orchestrator | Monday 30 March 2026  00:42:25 +0000 (0:00:00.133)       0:00:15.987 **********
2026-03-30 00:42:30.737796 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:30.737805 | orchestrator |
2026-03-30 00:42:30.737814 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-30 00:42:30.737823 | orchestrator | Monday 30 March 2026  00:42:26 +0000 (0:00:00.633)       0:00:16.620 **********
2026-03-30 00:42:30.737851 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:30.737857 | orchestrator |
2026-03-30 00:42:30.737863 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-30 00:42:30.737868 | orchestrator | Monday 30 March 2026  00:42:27 +0000 (0:00:00.521)       0:00:17.142 **********
2026-03-30 00:42:30.737874 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:30.737879 | orchestrator |
2026-03-30 00:42:30.737884 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-30 00:42:30.737892 | orchestrator | Monday 30 March 2026  00:42:27 +0000 (0:00:00.521)       0:00:17.663 **********
2026-03-30 00:42:30.737900 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:30.737909 | orchestrator |
2026-03-30 00:42:30.737915 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-30 00:42:30.737921 | orchestrator | Monday 30 March 2026  00:42:27 +0000 (0:00:00.148)       0:00:17.812 **********
2026-03-30 00:42:30.737926 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.737931 | orchestrator |
2026-03-30 00:42:30.737937 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-30 00:42:30.737942 | orchestrator | Monday 30 March 2026  00:42:27 +0000 (0:00:00.108)       0:00:17.921 **********
2026-03-30 00:42:30.737948 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.737953 | orchestrator |
2026-03-30 00:42:30.737958 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-30 00:42:30.737964 | orchestrator | Monday 30 March 2026  00:42:27 +0000 (0:00:00.103)       0:00:18.024 **********
2026-03-30 00:42:30.737972 | orchestrator | ok: [testbed-node-3] => {
2026-03-30 00:42:30.737981 | orchestrator |     "vgs_report": {
2026-03-30 00:42:30.737991 | orchestrator |         "vg": []
2026-03-30 00:42:30.738000 | orchestrator |     }
2026-03-30 00:42:30.738009 | orchestrator | }
2026-03-30 00:42:30.738065 | orchestrator |
2026-03-30 00:42:30.738075 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-30 00:42:30.738081 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.118)       0:00:18.143 **********
2026-03-30 00:42:30.738087 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738095 | orchestrator |
2026-03-30 00:42:30.738104 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-30 00:42:30.738113 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.119)       0:00:18.263 **********
2026-03-30 00:42:30.738122 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738131 | orchestrator |
2026-03-30 00:42:30.738141 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-30 00:42:30.738150 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.121)       0:00:18.384 **********
2026-03-30 00:42:30.738159 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738169 | orchestrator |
2026-03-30 00:42:30.738179 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-30 00:42:30.738188 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.246)       0:00:18.631 **********
2026-03-30 00:42:30.738197 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738206 | orchestrator |
2026-03-30 00:42:30.738215 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-30 00:42:30.738225 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.127)       0:00:18.759 **********
2026-03-30 00:42:30.738234 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738243 | orchestrator |
2026-03-30 00:42:30.738252 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-30 00:42:30.738262 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.119)       0:00:18.878 **********
2026-03-30 00:42:30.738273 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738283 | orchestrator |
2026-03-30 00:42:30.738293 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-30 00:42:30.738302 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.099)       0:00:18.977 **********
2026-03-30 00:42:30.738312 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738330 | orchestrator |
2026-03-30 00:42:30.738340 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-30 00:42:30.738348 | orchestrator | Monday 30 March 2026  00:42:28 +0000 (0:00:00.118)       0:00:19.095 **********
2026-03-30 00:42:30.738375 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738384 | orchestrator |
2026-03-30 00:42:30.738410 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-30 00:42:30.738417 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.127)       0:00:19.223 **********
2026-03-30 00:42:30.738422 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738427 | orchestrator |
2026-03-30 00:42:30.738433 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-30 00:42:30.738439 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.107)       0:00:19.331 **********
2026-03-30 00:42:30.738444 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738451 | orchestrator |
2026-03-30 00:42:30.738461 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-30 00:42:30.738469 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.124)       0:00:19.455 **********
2026-03-30 00:42:30.738478 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738487 | orchestrator |
2026-03-30 00:42:30.738496 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-30 00:42:30.738505 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.120)       0:00:19.576 **********
2026-03-30 00:42:30.738514 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738524 | orchestrator |
2026-03-30 00:42:30.738533 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-30 00:42:30.738541 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.121)       0:00:19.697 **********
2026-03-30 00:42:30.738550 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738559 | orchestrator |
2026-03-30 00:42:30.738568 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-30 00:42:30.738577 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.114)       0:00:19.812 **********
2026-03-30 00:42:30.738586 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738595 | orchestrator |
2026-03-30 00:42:30.738609 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-30 00:42:30.738618 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.109)       0:00:19.922 **********
2026-03-30 00:42:30.738628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:30.738639 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:30.738647 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738653 | orchestrator |
2026-03-30 00:42:30.738658 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-30 00:42:30.738663 | orchestrator | Monday 30 March 2026  00:42:29 +0000 (0:00:00.154)       0:00:20.076 **********
2026-03-30 00:42:30.738668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:30.738674 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:30.738679 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738684 | orchestrator |
2026-03-30 00:42:30.738690 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-30 00:42:30.738695 | orchestrator | Monday 30 March 2026  00:42:30 +0000 (0:00:00.270)       0:00:20.347 **********
2026-03-30 00:42:30.738700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:30.738706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:30.738717 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738722 | orchestrator |
2026-03-30 00:42:30.738727 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-30 00:42:30.738733 | orchestrator | Monday 30 March 2026  00:42:30 +0000 (0:00:00.144)       0:00:20.492 **********
2026-03-30 00:42:30.738738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:30.738758 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:30.738764 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738769 | orchestrator |
2026-03-30 00:42:30.738774 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-30 00:42:30.738779 | orchestrator | Monday 30 March 2026  00:42:30 +0000 (0:00:00.150)       0:00:20.642 **********
2026-03-30 00:42:30.738785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:30.738792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:30.738801 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:30.738810 | orchestrator |
2026-03-30 00:42:30.738818 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-30 00:42:30.738827 | orchestrator | Monday 30 March 2026  00:42:30 +0000 (0:00:00.131)       0:00:20.774 **********
2026-03-30 00:42:30.738842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.030696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.030836 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:36.030857 | orchestrator |
2026-03-30 00:42:36.030870 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-30 00:42:36.030883 | orchestrator | Monday 30 March 2026  00:42:30 +0000 (0:00:00.148)       0:00:20.922 **********
2026-03-30 00:42:36.030895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.030907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.030918 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:36.030929 | orchestrator |
2026-03-30 00:42:36.030940 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-30 00:42:36.030951 | orchestrator | Monday 30 March 2026  00:42:30 +0000 (0:00:00.137)       0:00:21.059 **********
2026-03-30 00:42:36.030962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.030989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.031001 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:36.031012 | orchestrator |
2026-03-30 00:42:36.031023 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-30 00:42:36.031034 | orchestrator | Monday 30 March 2026  00:42:31 +0000 (0:00:00.142)       0:00:21.201 **********
2026-03-30 00:42:36.031045 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:36.031057 | orchestrator |
2026-03-30 00:42:36.031093 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-30 00:42:36.031105 | orchestrator | Monday 30 March 2026  00:42:31 +0000 (0:00:00.488)       0:00:21.690 **********
2026-03-30 00:42:36.031116 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:36.031126 | orchestrator |
2026-03-30 00:42:36.031137 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-30 00:42:36.031148 | orchestrator | Monday 30 March 2026  00:42:32 +0000 (0:00:00.505)       0:00:22.196 **********
2026-03-30 00:42:36.031158 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:42:36.031169 | orchestrator |
2026-03-30 00:42:36.031180 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-30 00:42:36.031190 | orchestrator | Monday 30 March 2026  00:42:32 +0000 (0:00:00.137)       0:00:22.333 **********
2026-03-30 00:42:36.031202 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'vg_name': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.031214 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'vg_name': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.031225 | orchestrator |
2026-03-30 00:42:36.031236 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-30 00:42:36.031247 | orchestrator | Monday 30 March 2026  00:42:32 +0000 (0:00:00.169)       0:00:22.502 **********
2026-03-30 00:42:36.031258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.031269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.031280 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:36.031291 | orchestrator |
2026-03-30 00:42:36.031301 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-30 00:42:36.031312 | orchestrator | Monday 30 March 2026  00:42:32 +0000 (0:00:00.148)       0:00:22.651 **********
2026-03-30 00:42:36.031323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.031334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.031345 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:36.031356 | orchestrator |
2026-03-30 00:42:36.031366 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-30 00:42:36.031377 | orchestrator | Monday 30 March 2026  00:42:32 +0000 (0:00:00.438)       0:00:23.089 **********
2026-03-30 00:42:36.031388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'})
2026-03-30 00:42:36.031399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'})
2026-03-30 00:42:36.031409 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:42:36.031420 | orchestrator |
2026-03-30 00:42:36.031431 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-30 00:42:36.031442 | orchestrator | Monday 30 March 2026  00:42:33 +0000 (0:00:00.158)       0:00:23.248 **********
2026-03-30 00:42:36.031469 | orchestrator | ok: [testbed-node-3] => {
2026-03-30 00:42:36.031481 | orchestrator |     "lvm_report": {
2026-03-30 00:42:36.031493 | orchestrator |         "lv": [
2026-03-30 00:42:36.031504 | orchestrator |             {
2026-03-30 00:42:36.031515 | orchestrator |                 "lv_name": "osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17",
2026-03-30 00:42:36.031527 | orchestrator |                 "vg_name": "ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17"
2026-03-30 00:42:36.031538 | orchestrator |             },
2026-03-30 00:42:36.031558 | orchestrator |             {
2026-03-30 00:42:36.031569 | orchestrator |                 "lv_name": "osd-block-deb01b05-78a2-5c26-94fe-c042bb294237",
2026-03-30 00:42:36.031580 | orchestrator |                 "vg_name": "ceph-deb01b05-78a2-5c26-94fe-c042bb294237"
2026-03-30 00:42:36.031591 | orchestrator |             }
2026-03-30 00:42:36.031602 | orchestrator |         ],
2026-03-30 00:42:36.031613 | orchestrator |         "pv": [
2026-03-30 00:42:36.031624 | orchestrator |             {
2026-03-30 00:42:36.031635 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-30 00:42:36.031646 | orchestrator |                 "vg_name": "ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17"
2026-03-30 00:42:36.031657 | orchestrator |             },
2026-03-30 00:42:36.031668 | orchestrator |             {
2026-03-30 00:42:36.031680 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-30 00:42:36.031691 | orchestrator |                 "vg_name": "ceph-deb01b05-78a2-5c26-94fe-c042bb294237"
2026-03-30 00:42:36.031702 | orchestrator |             }
2026-03-30 00:42:36.031712 | orchestrator |         ]
2026-03-30 00:42:36.031724 | orchestrator |     }
2026-03-30 00:42:36.031735 | orchestrator | }
2026-03-30 00:42:36.031768 | orchestrator |
2026-03-30 00:42:36.031780 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-30 00:42:36.031791 | orchestrator |
2026-03-30 00:42:36.031802 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-30 00:42:36.031813 | orchestrator | Monday 30 March 2026  00:42:33 +0000 (0:00:00.304)       0:00:23.552 **********
2026-03-30 00:42:36.031824 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-30 00:42:36.031835 | orchestrator |
2026-03-30 00:42:36.031846 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-30 00:42:36.031857 | orchestrator | Monday 30 March 2026  00:42:33 +0000 (0:00:00.261)       0:00:23.814 **********
2026-03-30 00:42:36.031868 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:42:36.031879 | orchestrator |
2026-03-30 00:42:36.031890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.031901 | orchestrator | Monday 30 March 2026  00:42:33 +0000 (0:00:00.234)       0:00:24.049 **********
2026-03-30 00:42:36.031912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-30 00:42:36.031922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-30 00:42:36.031933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-30 00:42:36.031943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-30 00:42:36.031954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-30 00:42:36.031965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-30 00:42:36.031975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-30 00:42:36.031986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-30 00:42:36.031997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-30 00:42:36.032015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-30 00:42:36.032027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-30 00:42:36.032037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-30 00:42:36.032048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-30 00:42:36.032058 | orchestrator |
2026-03-30 00:42:36.032069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.032080 | orchestrator | Monday 30 March 2026  00:42:34 +0000 (0:00:00.408)       0:00:24.458 **********
2026-03-30 00:42:36.032090 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:36.032108 | orchestrator |
2026-03-30 00:42:36.032119 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.032130 | orchestrator | Monday 30 March 2026  00:42:34 +0000 (0:00:00.186)       0:00:24.644 **********
2026-03-30 00:42:36.032140 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:36.032151 | orchestrator |
2026-03-30 00:42:36.032162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.032172 | orchestrator | Monday 30 March 2026  00:42:34 +0000 (0:00:00.207)       0:00:24.852 **********
2026-03-30 00:42:36.032183 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:36.032194 | orchestrator |
2026-03-30 00:42:36.032204 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.032215 | orchestrator | Monday 30 March 2026  00:42:34 +0000 (0:00:00.196)       0:00:25.048 **********
2026-03-30 00:42:36.032226 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:36.032237 | orchestrator |
2026-03-30 00:42:36.032248 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.032259 | orchestrator | Monday 30 March 2026  00:42:35 +0000 (0:00:00.620)       0:00:25.669 **********
2026-03-30 00:42:36.032269 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:36.032280 | orchestrator |
2026-03-30 00:42:36.032291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:36.032301 | orchestrator | Monday 30 March 2026  00:42:35 +0000 (0:00:00.209)       0:00:25.879 **********
2026-03-30 00:42:36.032312 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:36.032323 | orchestrator |
2026-03-30 00:42:36.032341 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248371 | orchestrator | Monday 30 March 2026  00:42:36 +0000 (0:00:00.254)       0:00:26.133 **********
2026-03-30 00:42:47.248460 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.248470 | orchestrator |
2026-03-30 00:42:47.248478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248484 | orchestrator | Monday 30 March 2026  00:42:36 +0000 (0:00:00.226)       0:00:26.360 **********
2026-03-30 00:42:47.248491 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.248498 | orchestrator |
2026-03-30 00:42:47.248505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248511 | orchestrator | Monday 30 March 2026  00:42:36 +0000 (0:00:00.191)       0:00:26.552 **********
2026-03-30 00:42:47.248518 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf)
2026-03-30 00:42:47.248525 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf)
2026-03-30 00:42:47.248532 | orchestrator |
2026-03-30 00:42:47.248539 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248546 | orchestrator | Monday 30 March 2026  00:42:36 +0000 (0:00:00.431)       0:00:26.983 **********
2026-03-30 00:42:47.248552 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a)
2026-03-30 00:42:47.248559 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a)
2026-03-30 00:42:47.248566 | orchestrator |
2026-03-30 00:42:47.248586 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248593 | orchestrator | Monday 30 March 2026  00:42:37 +0000 (0:00:00.425)       0:00:27.409 **********
2026-03-30 00:42:47.248600 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec)
2026-03-30 00:42:47.248606 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec)
2026-03-30 00:42:47.248613 | orchestrator |
2026-03-30 00:42:47.248620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248626 | orchestrator | Monday 30 March 2026  00:42:37 +0000 (0:00:00.434)       0:00:27.844 **********
2026-03-30 00:42:47.248633 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a)
2026-03-30 00:42:47.248660 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a)
2026-03-30 00:42:47.248667 | orchestrator |
2026-03-30 00:42:47.248673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-30 00:42:47.248680 | orchestrator | Monday 30 March 2026  00:42:38 +0000 (0:00:00.511)       0:00:28.355 **********
2026-03-30 00:42:47.248687 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-30 00:42:47.248693 | orchestrator |
2026-03-30 00:42:47.248701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.248708 | orchestrator | Monday 30 March 2026  00:42:38 +0000 (0:00:00.357)       0:00:28.713 **********
2026-03-30 00:42:47.248714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-30 00:42:47.248721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-30 00:42:47.248783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-30 00:42:47.248790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-30 00:42:47.248797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-30 00:42:47.248803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-30 00:42:47.248809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-30 00:42:47.248815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-30 00:42:47.248822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-30 00:42:47.248828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-30 00:42:47.248834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-30 00:42:47.248841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-30 00:42:47.248848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-30 00:42:47.248854 | orchestrator |
2026-03-30 00:42:47.248860 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.248866 | orchestrator | Monday 30 March 2026  00:42:39 +0000 (0:00:01.026)       0:00:29.739 **********
2026-03-30 00:42:47.248873 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.248879 | orchestrator |
2026-03-30 00:42:47.248886 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.248892 | orchestrator | Monday 30 March 2026  00:42:39 +0000 (0:00:00.242)       0:00:29.982 **********
2026-03-30 00:42:47.248899 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.248905 | orchestrator |
2026-03-30 00:42:47.248912 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.248919 | orchestrator | Monday 30 March 2026  00:42:40 +0000 (0:00:00.253)       0:00:30.236 **********
2026-03-30 00:42:47.248926 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.248932 | orchestrator |
2026-03-30 00:42:47.248954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.248961 | orchestrator | Monday 30 March 2026  00:42:40 +0000 (0:00:00.210)       0:00:30.446 **********
2026-03-30 00:42:47.248968 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.248975 | orchestrator |
2026-03-30 00:42:47.248981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.248988 | orchestrator | Monday 30 March 2026  00:42:40 +0000 (0:00:00.254)       0:00:30.701 **********
2026-03-30 00:42:47.248995 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249001 | orchestrator |
2026-03-30 00:42:47.249008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249022 | orchestrator | Monday 30 March 2026  00:42:40 +0000 (0:00:00.241)       0:00:30.942 **********
2026-03-30 00:42:47.249028 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249035 | orchestrator |
2026-03-30 00:42:47.249041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249049 | orchestrator | Monday 30 March 2026  00:42:41 +0000 (0:00:00.218)       0:00:31.160 **********
2026-03-30 00:42:47.249055 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249062 | orchestrator |
2026-03-30 00:42:47.249068 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249075 | orchestrator | Monday 30 March 2026  00:42:41 +0000 (0:00:00.201)       0:00:31.362 **********
2026-03-30 00:42:47.249081 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249088 | orchestrator |
2026-03-30 00:42:47.249094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249106 | orchestrator | Monday 30 March 2026  00:42:41 +0000 (0:00:00.235)       0:00:31.597 **********
2026-03-30 00:42:47.249112 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-30 00:42:47.249119 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-30 00:42:47.249126 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-30 00:42:47.249133 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-30 00:42:47.249139 | orchestrator |
2026-03-30 00:42:47.249146 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249152 | orchestrator | Monday 30 March 2026  00:42:42 +0000 (0:00:00.877)       0:00:32.475 **********
2026-03-30 00:42:47.249159 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249166 | orchestrator |
2026-03-30 00:42:47.249172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249179 | orchestrator | Monday 30 March 2026  00:42:42 +0000 (0:00:00.183)       0:00:32.659 **********
2026-03-30 00:42:47.249185 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249192 | orchestrator |
2026-03-30 00:42:47.249198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249205 | orchestrator | Monday 30 March 2026  00:42:42 +0000 (0:00:00.196)       0:00:32.855 **********
2026-03-30 00:42:47.249211 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249218 | orchestrator |
2026-03-30 00:42:47.249225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-30 00:42:47.249231 | orchestrator | Monday 30 March 2026  00:42:43 +0000 (0:00:00.675)       0:00:33.531 **********
2026-03-30 00:42:47.249238 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249244 | orchestrator |
2026-03-30 00:42:47.249251 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-30 00:42:47.249257 | orchestrator | Monday 30 March 2026  00:42:43 +0000 (0:00:00.204)       0:00:33.735 **********
2026-03-30 00:42:47.249264 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249271 | orchestrator |
2026-03-30 00:42:47.249277 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-30 00:42:47.249284 | orchestrator | Monday 30 March 2026  00:42:43 +0000 (0:00:00.121)       0:00:33.857 **********
2026-03-30 00:42:47.249290 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e5d1498-d7a5-5a93-a004-d1785e71aab2'}})
2026-03-30 00:42:47.249297 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae410091-a002-50e8-b50c-29c9b1a933c3'}})
2026-03-30 00:42:47.249304 | orchestrator |
2026-03-30 00:42:47.249310 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-30 00:42:47.249316 | orchestrator | Monday 30 March 2026  00:42:43 +0000 (0:00:00.193)       0:00:34.050 **********
2026-03-30 00:42:47.249324 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})
2026-03-30 00:42:47.249332 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})
2026-03-30 00:42:47.249344 | orchestrator |
2026-03-30 00:42:47.249351 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-30 00:42:47.249358 | orchestrator | Monday 30 March 2026  00:42:45 +0000 (0:00:01.877)       0:00:35.928 **********
2026-03-30 00:42:47.249365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})
2026-03-30 00:42:47.249373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})
2026-03-30 00:42:47.249379 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:47.249385 | orchestrator |
2026-03-30 00:42:47.249392 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-30 00:42:47.249399 | orchestrator | Monday 30 March 2026  00:42:45 +0000 (0:00:00.141)       0:00:36.070 **********
2026-03-30 00:42:47.249405 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})
2026-03-30 00:42:47.249417 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})
2026-03-30 00:42:52.805084 | orchestrator |
2026-03-30 00:42:52.805195 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-30 00:42:52.805212 | orchestrator | Monday 30 March 2026  00:42:47 +0000 (0:00:01.369)       0:00:37.439 **********
2026-03-30 00:42:52.805224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})
2026-03-30 00:42:52.805237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})
2026-03-30 00:42:52.805249 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:52.805261 | orchestrator |
2026-03-30 00:42:52.805273 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-30 00:42:52.805284 | orchestrator | Monday 30 March 2026  00:42:47 +0000 (0:00:00.145)       0:00:37.585 **********
2026-03-30 00:42:52.805295 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:52.805306 | orchestrator |
2026-03-30 00:42:52.805317 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-30 00:42:52.805328 | orchestrator | Monday 30 March 2026  00:42:47 +0000 (0:00:00.130)       0:00:37.715 **********
2026-03-30 00:42:52.805340 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})
2026-03-30 00:42:52.805351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})
2026-03-30 00:42:52.805362 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:42:52.805374 | orchestrator |
2026-03-30 00:42:52.805385 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-30 00:42:52.805396 | orchestrator | Monday 30 March 2026  00:42:47 +0000 (0:00:00.151)       0:00:37.867 **********
2026-03-30 00:42:52.805407 | orchestrator | skipping: [testbed-node-4]
2026-03-30
00:42:52.805418 | orchestrator | 2026-03-30 00:42:52.805429 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-30 00:42:52.805441 | orchestrator | Monday 30 March 2026 00:42:47 +0000 (0:00:00.130) 0:00:37.997 ********** 2026-03-30 00:42:52.805452 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:52.805463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:52.805502 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.805513 | orchestrator | 2026-03-30 00:42:52.805524 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-30 00:42:52.805536 | orchestrator | Monday 30 March 2026 00:42:48 +0000 (0:00:00.162) 0:00:38.160 ********** 2026-03-30 00:42:52.805546 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.805558 | orchestrator | 2026-03-30 00:42:52.805588 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-30 00:42:52.805599 | orchestrator | Monday 30 March 2026 00:42:48 +0000 (0:00:00.325) 0:00:38.485 ********** 2026-03-30 00:42:52.805612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:52.805625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:52.805638 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.805651 | orchestrator | 2026-03-30 00:42:52.805663 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-03-30 00:42:52.805677 | orchestrator | Monday 30 March 2026 00:42:48 +0000 (0:00:00.165) 0:00:38.651 ********** 2026-03-30 00:42:52.805690 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:52.805703 | orchestrator | 2026-03-30 00:42:52.805716 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-30 00:42:52.805771 | orchestrator | Monday 30 March 2026 00:42:48 +0000 (0:00:00.134) 0:00:38.786 ********** 2026-03-30 00:42:52.805785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:52.805797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:52.805810 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.805824 | orchestrator | 2026-03-30 00:42:52.805837 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-30 00:42:52.805849 | orchestrator | Monday 30 March 2026 00:42:48 +0000 (0:00:00.177) 0:00:38.963 ********** 2026-03-30 00:42:52.805861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:52.805875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:52.805888 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.805900 | orchestrator | 2026-03-30 00:42:52.805912 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-30 00:42:52.805944 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.164) 0:00:39.128 
********** 2026-03-30 00:42:52.805957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:52.805970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:52.805984 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.805995 | orchestrator | 2026-03-30 00:42:52.806006 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-30 00:42:52.806076 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.146) 0:00:39.274 ********** 2026-03-30 00:42:52.806102 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806122 | orchestrator | 2026-03-30 00:42:52.806141 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-30 00:42:52.806160 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.187) 0:00:39.461 ********** 2026-03-30 00:42:52.806190 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806201 | orchestrator | 2026-03-30 00:42:52.806212 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-30 00:42:52.806229 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.151) 0:00:39.613 ********** 2026-03-30 00:42:52.806240 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806251 | orchestrator | 2026-03-30 00:42:52.806262 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-30 00:42:52.806272 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.132) 0:00:39.746 ********** 2026-03-30 00:42:52.806283 | orchestrator | ok: [testbed-node-4] => { 2026-03-30 00:42:52.806295 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-30 
00:42:52.806306 | orchestrator | } 2026-03-30 00:42:52.806317 | orchestrator | 2026-03-30 00:42:52.806328 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-30 00:42:52.806339 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.142) 0:00:39.889 ********** 2026-03-30 00:42:52.806350 | orchestrator | ok: [testbed-node-4] => { 2026-03-30 00:42:52.806360 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-30 00:42:52.806371 | orchestrator | } 2026-03-30 00:42:52.806382 | orchestrator | 2026-03-30 00:42:52.806393 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-30 00:42:52.806404 | orchestrator | Monday 30 March 2026 00:42:49 +0000 (0:00:00.138) 0:00:40.027 ********** 2026-03-30 00:42:52.806415 | orchestrator | ok: [testbed-node-4] => { 2026-03-30 00:42:52.806426 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-30 00:42:52.806437 | orchestrator | } 2026-03-30 00:42:52.806448 | orchestrator | 2026-03-30 00:42:52.806458 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-30 00:42:52.806469 | orchestrator | Monday 30 March 2026 00:42:50 +0000 (0:00:00.121) 0:00:40.148 ********** 2026-03-30 00:42:52.806480 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:52.806491 | orchestrator | 2026-03-30 00:42:52.806502 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-30 00:42:52.806513 | orchestrator | Monday 30 March 2026 00:42:50 +0000 (0:00:00.692) 0:00:40.841 ********** 2026-03-30 00:42:52.806524 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:52.806534 | orchestrator | 2026-03-30 00:42:52.806545 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-30 00:42:52.806556 | orchestrator | Monday 30 March 2026 00:42:51 +0000 (0:00:00.526) 0:00:41.367 ********** 2026-03-30 
00:42:52.806567 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:52.806577 | orchestrator | 2026-03-30 00:42:52.806588 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-30 00:42:52.806599 | orchestrator | Monday 30 March 2026 00:42:51 +0000 (0:00:00.508) 0:00:41.876 ********** 2026-03-30 00:42:52.806610 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:52.806620 | orchestrator | 2026-03-30 00:42:52.806631 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-30 00:42:52.806642 | orchestrator | Monday 30 March 2026 00:42:51 +0000 (0:00:00.144) 0:00:42.021 ********** 2026-03-30 00:42:52.806652 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806663 | orchestrator | 2026-03-30 00:42:52.806674 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-30 00:42:52.806685 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.111) 0:00:42.132 ********** 2026-03-30 00:42:52.806695 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806706 | orchestrator | 2026-03-30 00:42:52.806745 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-30 00:42:52.806759 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.101) 0:00:42.233 ********** 2026-03-30 00:42:52.806770 | orchestrator | ok: [testbed-node-4] => { 2026-03-30 00:42:52.806781 | orchestrator |  "vgs_report": { 2026-03-30 00:42:52.806793 | orchestrator |  "vg": [] 2026-03-30 00:42:52.806805 | orchestrator |  } 2026-03-30 00:42:52.806816 | orchestrator | } 2026-03-30 00:42:52.806834 | orchestrator | 2026-03-30 00:42:52.806845 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-30 00:42:52.806856 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.136) 0:00:42.370 ********** 2026-03-30 00:42:52.806867 | 
orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806877 | orchestrator | 2026-03-30 00:42:52.806888 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-30 00:42:52.806899 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.137) 0:00:42.508 ********** 2026-03-30 00:42:52.806910 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806920 | orchestrator | 2026-03-30 00:42:52.806931 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-30 00:42:52.806942 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.140) 0:00:42.648 ********** 2026-03-30 00:42:52.806953 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.806964 | orchestrator | 2026-03-30 00:42:52.806974 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-30 00:42:52.806986 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.120) 0:00:42.769 ********** 2026-03-30 00:42:52.806997 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:52.807008 | orchestrator | 2026-03-30 00:42:52.807028 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-30 00:42:57.276031 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.138) 0:00:42.907 ********** 2026-03-30 00:42:57.276139 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276155 | orchestrator | 2026-03-30 00:42:57.276168 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-30 00:42:57.276180 | orchestrator | Monday 30 March 2026 00:42:52 +0000 (0:00:00.133) 0:00:43.040 ********** 2026-03-30 00:42:57.276191 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276201 | orchestrator | 2026-03-30 00:42:57.276212 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-03-30 00:42:57.276223 | orchestrator | Monday 30 March 2026 00:42:53 +0000 (0:00:00.330) 0:00:43.371 ********** 2026-03-30 00:42:57.276234 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276244 | orchestrator | 2026-03-30 00:42:57.276255 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-30 00:42:57.276266 | orchestrator | Monday 30 March 2026 00:42:53 +0000 (0:00:00.137) 0:00:43.508 ********** 2026-03-30 00:42:57.276277 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276287 | orchestrator | 2026-03-30 00:42:57.276298 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-30 00:42:57.276309 | orchestrator | Monday 30 March 2026 00:42:53 +0000 (0:00:00.130) 0:00:43.638 ********** 2026-03-30 00:42:57.276336 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276348 | orchestrator | 2026-03-30 00:42:57.276359 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-30 00:42:57.276370 | orchestrator | Monday 30 March 2026 00:42:53 +0000 (0:00:00.135) 0:00:43.774 ********** 2026-03-30 00:42:57.276380 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276391 | orchestrator | 2026-03-30 00:42:57.276402 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-30 00:42:57.276413 | orchestrator | Monday 30 March 2026 00:42:53 +0000 (0:00:00.133) 0:00:43.907 ********** 2026-03-30 00:42:57.276423 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276434 | orchestrator | 2026-03-30 00:42:57.276445 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-30 00:42:57.276456 | orchestrator | Monday 30 March 2026 00:42:53 +0000 (0:00:00.131) 0:00:44.039 ********** 2026-03-30 00:42:57.276467 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276478 
| orchestrator | 2026-03-30 00:42:57.276488 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-30 00:42:57.276499 | orchestrator | Monday 30 March 2026 00:42:54 +0000 (0:00:00.130) 0:00:44.170 ********** 2026-03-30 00:42:57.276510 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276547 | orchestrator | 2026-03-30 00:42:57.276558 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-30 00:42:57.276571 | orchestrator | Monday 30 March 2026 00:42:54 +0000 (0:00:00.131) 0:00:44.301 ********** 2026-03-30 00:42:57.276583 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276596 | orchestrator | 2026-03-30 00:42:57.276608 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-30 00:42:57.276620 | orchestrator | Monday 30 March 2026 00:42:54 +0000 (0:00:00.136) 0:00:44.438 ********** 2026-03-30 00:42:57.276634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.276648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.276660 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276672 | orchestrator | 2026-03-30 00:42:57.276684 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-30 00:42:57.276697 | orchestrator | Monday 30 March 2026 00:42:54 +0000 (0:00:00.159) 0:00:44.598 ********** 2026-03-30 00:42:57.276709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.276888 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.276906 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276917 | orchestrator | 2026-03-30 00:42:57.276928 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-30 00:42:57.276939 | orchestrator | Monday 30 March 2026 00:42:54 +0000 (0:00:00.149) 0:00:44.747 ********** 2026-03-30 00:42:57.276950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.276961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.276972 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.276983 | orchestrator | 2026-03-30 00:42:57.276994 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-30 00:42:57.277005 | orchestrator | Monday 30 March 2026 00:42:54 +0000 (0:00:00.143) 0:00:44.891 ********** 2026-03-30 00:42:57.277016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277027 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.277039 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.277049 | orchestrator | 2026-03-30 00:42:57.277082 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-30 00:42:57.277094 | orchestrator | Monday 30 March 2026 00:42:55 +0000 (0:00:00.342) 0:00:45.234 ********** 2026-03-30 
00:42:57.277105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.277127 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.277138 | orchestrator | 2026-03-30 00:42:57.277149 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-30 00:42:57.277160 | orchestrator | Monday 30 March 2026 00:42:55 +0000 (0:00:00.151) 0:00:45.385 ********** 2026-03-30 00:42:57.277183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.277222 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.277233 | orchestrator | 2026-03-30 00:42:57.277244 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-30 00:42:57.277267 | orchestrator | Monday 30 March 2026 00:42:55 +0000 (0:00:00.152) 0:00:45.538 ********** 2026-03-30 00:42:57.277278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.277300 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.277311 | orchestrator | 
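The task output above shows the naming convention for OSD block storage: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, from which the playbook derives the VG name (`ceph-<uuid>`) and the LV name (`osd-block-<uuid>`), with the raw device (`/dev/sdb`, `/dev/sdc`) as the backing PV. A minimal Python sketch of that mapping, using the UUIDs from this run (the function name is illustrative, not from the playbook):

```python
# Sketch of the mapping performed by "Create dict of block VGs -> PVs",
# "Create block VGs" and "Create block LVs" above. The naming convention
# matches the log output; block_layout() itself is a hypothetical helper.
def block_layout(ceph_osd_devices):
    """Map each OSD device to its PV path, VG name and LV name."""
    layout = {}
    for dev, params in ceph_osd_devices.items():
        uuid = params["osd_lvm_uuid"]
        layout[f"ceph-{uuid}"] = {
            "pv": f"/dev/{dev}",
            "lv": f"osd-block-{uuid}",
        }
    return layout

devices = {
    "sdb": {"osd_lvm_uuid": "3e5d1498-d7a5-5a93-a004-d1785e71aab2"},
    "sdc": {"osd_lvm_uuid": "ae410091-a002-50e8-b50c-29c9b1a933c3"},
}
for vg, info in block_layout(devices).items():
    # The playbook's LVM tasks then create VG <vg> on PV <pv>
    # and LV <lv> inside it, as seen in the "changed:" items above.
    print(vg, info["pv"], info["lv"])
```

The skipped DB/WAL tasks follow the same pattern when `ceph_db_devices`, `ceph_wal_devices` or `ceph_db_wal_devices` are defined; in this testbed run they are empty, which is why every DB/WAL VG and LV task reports `skipping`.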
2026-03-30 00:42:57.277322 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-30 00:42:57.277332 | orchestrator | Monday 30 March 2026 00:42:55 +0000 (0:00:00.152) 0:00:45.690 ********** 2026-03-30 00:42:57.277343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.277365 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.277376 | orchestrator | 2026-03-30 00:42:57.277387 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-30 00:42:57.277398 | orchestrator | Monday 30 March 2026 00:42:55 +0000 (0:00:00.138) 0:00:45.829 ********** 2026-03-30 00:42:57.277409 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:57.277420 | orchestrator | 2026-03-30 00:42:57.277430 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-30 00:42:57.277441 | orchestrator | Monday 30 March 2026 00:42:56 +0000 (0:00:00.488) 0:00:46.318 ********** 2026-03-30 00:42:57.277452 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:57.277463 | orchestrator | 2026-03-30 00:42:57.277474 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-30 00:42:57.277484 | orchestrator | Monday 30 March 2026 00:42:56 +0000 (0:00:00.507) 0:00:46.826 ********** 2026-03-30 00:42:57.277495 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:42:57.277506 | orchestrator | 2026-03-30 00:42:57.277517 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-30 00:42:57.277528 | orchestrator | Monday 30 March 2026 
00:42:56 +0000 (0:00:00.143) 0:00:46.970 ********** 2026-03-30 00:42:57.277539 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'vg_name': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'}) 2026-03-30 00:42:57.277551 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'vg_name': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'}) 2026-03-30 00:42:57.277562 | orchestrator | 2026-03-30 00:42:57.277573 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-30 00:42:57.277584 | orchestrator | Monday 30 March 2026 00:42:57 +0000 (0:00:00.177) 0:00:47.147 ********** 2026-03-30 00:42:57.277595 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:42:57.277659 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:42:57.277677 | orchestrator | 2026-03-30 00:42:57.277688 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-30 00:42:57.277699 | orchestrator | Monday 30 March 2026 00:42:57 +0000 (0:00:00.159) 0:00:47.307 ********** 2026-03-30 00:42:57.277710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:42:57.277764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:43:02.685894 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:43:02.686087 | orchestrator | 2026-03-30 
00:43:02.686118 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-30 00:43:02.686138 | orchestrator | Monday 30 March 2026 00:42:57 +0000 (0:00:00.161) 0:00:47.468 ********** 2026-03-30 00:43:02.686156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'})  2026-03-30 00:43:02.686298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'})  2026-03-30 00:43:02.686325 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:43:02.686371 | orchestrator | 2026-03-30 00:43:02.686394 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-30 00:43:02.686417 | orchestrator | Monday 30 March 2026 00:42:57 +0000 (0:00:00.128) 0:00:47.597 ********** 2026-03-30 00:43:02.686447 | orchestrator | ok: [testbed-node-4] => { 2026-03-30 00:43:02.686468 | orchestrator |  "lvm_report": { 2026-03-30 00:43:02.686491 | orchestrator |  "lv": [ 2026-03-30 00:43:02.686535 | orchestrator |  { 2026-03-30 00:43:02.686558 | orchestrator |  "lv_name": "osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2", 2026-03-30 00:43:02.686585 | orchestrator |  "vg_name": "ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2" 2026-03-30 00:43:02.686608 | orchestrator |  }, 2026-03-30 00:43:02.686628 | orchestrator |  { 2026-03-30 00:43:02.686655 | orchestrator |  "lv_name": "osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3", 2026-03-30 00:43:02.686678 | orchestrator |  "vg_name": "ceph-ae410091-a002-50e8-b50c-29c9b1a933c3" 2026-03-30 00:43:02.686698 | orchestrator |  } 2026-03-30 00:43:02.686785 | orchestrator |  ], 2026-03-30 00:43:02.686807 | orchestrator |  "pv": [ 2026-03-30 00:43:02.686827 | orchestrator |  { 2026-03-30 00:43:02.686848 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-30 
00:43:02.686869 | orchestrator |  "vg_name": "ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2" 2026-03-30 00:43:02.686888 | orchestrator |  }, 2026-03-30 00:43:02.686906 | orchestrator |  { 2026-03-30 00:43:02.686928 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-30 00:43:02.686947 | orchestrator |  "vg_name": "ceph-ae410091-a002-50e8-b50c-29c9b1a933c3" 2026-03-30 00:43:02.686968 | orchestrator |  } 2026-03-30 00:43:02.687012 | orchestrator |  ] 2026-03-30 00:43:02.687033 | orchestrator |  } 2026-03-30 00:43:02.687054 | orchestrator | } 2026-03-30 00:43:02.687074 | orchestrator | 2026-03-30 00:43:02.687095 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-30 00:43:02.687116 | orchestrator | 2026-03-30 00:43:02.687136 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-30 00:43:02.687157 | orchestrator | Monday 30 March 2026 00:42:57 +0000 (0:00:00.383) 0:00:47.980 ********** 2026-03-30 00:43:02.687177 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-30 00:43:02.687197 | orchestrator | 2026-03-30 00:43:02.687232 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-30 00:43:02.687254 | orchestrator | Monday 30 March 2026 00:42:58 +0000 (0:00:00.264) 0:00:48.245 ********** 2026-03-30 00:43:02.687308 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:43:02.687330 | orchestrator | 2026-03-30 00:43:02.687348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687365 | orchestrator | Monday 30 March 2026 00:42:58 +0000 (0:00:00.211) 0:00:48.457 ********** 2026-03-30 00:43:02.687402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-30 00:43:02.687419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-30 
00:43:02.687436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-30 00:43:02.687459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-30 00:43:02.687476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-30 00:43:02.687491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-30 00:43:02.687506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-30 00:43:02.687522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-30 00:43:02.687540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-30 00:43:02.687557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-30 00:43:02.687575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-30 00:43:02.687594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-30 00:43:02.687612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-30 00:43:02.687630 | orchestrator | 2026-03-30 00:43:02.687648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687667 | orchestrator | Monday 30 March 2026 00:42:58 +0000 (0:00:00.388) 0:00:48.845 ********** 2026-03-30 00:43:02.687679 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.687690 | orchestrator | 2026-03-30 00:43:02.687700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687737 | orchestrator | Monday 30 March 2026 00:42:58 +0000 (0:00:00.178) 0:00:49.024 
********** 2026-03-30 00:43:02.687748 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.687759 | orchestrator | 2026-03-30 00:43:02.687770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687807 | orchestrator | Monday 30 March 2026 00:42:59 +0000 (0:00:00.179) 0:00:49.204 ********** 2026-03-30 00:43:02.687818 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.687829 | orchestrator | 2026-03-30 00:43:02.687840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687850 | orchestrator | Monday 30 March 2026 00:42:59 +0000 (0:00:00.165) 0:00:49.370 ********** 2026-03-30 00:43:02.687861 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.687872 | orchestrator | 2026-03-30 00:43:02.687883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687894 | orchestrator | Monday 30 March 2026 00:42:59 +0000 (0:00:00.193) 0:00:49.564 ********** 2026-03-30 00:43:02.687904 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.687915 | orchestrator | 2026-03-30 00:43:02.687925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687936 | orchestrator | Monday 30 March 2026 00:42:59 +0000 (0:00:00.190) 0:00:49.754 ********** 2026-03-30 00:43:02.687947 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.687958 | orchestrator | 2026-03-30 00:43:02.687969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.687990 | orchestrator | Monday 30 March 2026 00:43:00 +0000 (0:00:00.445) 0:00:50.200 ********** 2026-03-30 00:43:02.688001 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.688023 | orchestrator | 2026-03-30 00:43:02.688034 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-03-30 00:43:02.688045 | orchestrator | Monday 30 March 2026 00:43:00 +0000 (0:00:00.203) 0:00:50.403 ********** 2026-03-30 00:43:02.688056 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:02.688067 | orchestrator | 2026-03-30 00:43:02.688077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.688088 | orchestrator | Monday 30 March 2026 00:43:00 +0000 (0:00:00.182) 0:00:50.586 ********** 2026-03-30 00:43:02.688099 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0) 2026-03-30 00:43:02.688111 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0) 2026-03-30 00:43:02.688122 | orchestrator | 2026-03-30 00:43:02.688132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.688143 | orchestrator | Monday 30 March 2026 00:43:00 +0000 (0:00:00.392) 0:00:50.978 ********** 2026-03-30 00:43:02.688153 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c) 2026-03-30 00:43:02.688164 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c) 2026-03-30 00:43:02.688175 | orchestrator | 2026-03-30 00:43:02.688185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.688196 | orchestrator | Monday 30 March 2026 00:43:01 +0000 (0:00:00.402) 0:00:51.381 ********** 2026-03-30 00:43:02.688207 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74) 2026-03-30 00:43:02.688217 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74) 2026-03-30 00:43:02.688228 | orchestrator | 2026-03-30 00:43:02.688239 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.688249 | orchestrator | Monday 30 March 2026 00:43:01 +0000 (0:00:00.393) 0:00:51.774 ********** 2026-03-30 00:43:02.688260 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57) 2026-03-30 00:43:02.688271 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57) 2026-03-30 00:43:02.688281 | orchestrator | 2026-03-30 00:43:02.688292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-30 00:43:02.688303 | orchestrator | Monday 30 March 2026 00:43:02 +0000 (0:00:00.400) 0:00:52.175 ********** 2026-03-30 00:43:02.688314 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-30 00:43:02.688325 | orchestrator | 2026-03-30 00:43:02.688335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:02.688346 | orchestrator | Monday 30 March 2026 00:43:02 +0000 (0:00:00.312) 0:00:52.487 ********** 2026-03-30 00:43:02.688357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-30 00:43:02.688367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-30 00:43:02.688378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-30 00:43:02.688388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-30 00:43:02.688399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-30 00:43:02.688410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-30 00:43:02.688420 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-30 00:43:02.688431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-30 00:43:02.688441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-30 00:43:02.688459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-30 00:43:02.688470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-30 00:43:02.688487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-30 00:43:10.725162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-30 00:43:10.725268 | orchestrator | 2026-03-30 00:43:10.725284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725297 | orchestrator | Monday 30 March 2026 00:43:02 +0000 (0:00:00.377) 0:00:52.865 ********** 2026-03-30 00:43:10.725309 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725322 | orchestrator | 2026-03-30 00:43:10.725333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725344 | orchestrator | Monday 30 March 2026 00:43:02 +0000 (0:00:00.215) 0:00:53.080 ********** 2026-03-30 00:43:10.725355 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725366 | orchestrator | 2026-03-30 00:43:10.725377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725388 | orchestrator | Monday 30 March 2026 00:43:03 +0000 (0:00:00.177) 0:00:53.258 ********** 2026-03-30 00:43:10.725399 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725410 | orchestrator | 2026-03-30 00:43:10.725421 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725473 | orchestrator | Monday 30 March 2026 00:43:03 +0000 (0:00:00.452) 0:00:53.710 ********** 2026-03-30 00:43:10.725485 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725496 | orchestrator | 2026-03-30 00:43:10.725507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725518 | orchestrator | Monday 30 March 2026 00:43:03 +0000 (0:00:00.174) 0:00:53.885 ********** 2026-03-30 00:43:10.725529 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725540 | orchestrator | 2026-03-30 00:43:10.725550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725561 | orchestrator | Monday 30 March 2026 00:43:03 +0000 (0:00:00.185) 0:00:54.070 ********** 2026-03-30 00:43:10.725572 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725583 | orchestrator | 2026-03-30 00:43:10.725594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725605 | orchestrator | Monday 30 March 2026 00:43:04 +0000 (0:00:00.194) 0:00:54.264 ********** 2026-03-30 00:43:10.725616 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725626 | orchestrator | 2026-03-30 00:43:10.725637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725648 | orchestrator | Monday 30 March 2026 00:43:04 +0000 (0:00:00.179) 0:00:54.444 ********** 2026-03-30 00:43:10.725659 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725670 | orchestrator | 2026-03-30 00:43:10.725681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725693 | orchestrator | Monday 30 March 2026 00:43:04 +0000 (0:00:00.194) 0:00:54.639 ********** 
2026-03-30 00:43:10.725730 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-30 00:43:10.725744 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-30 00:43:10.725757 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-30 00:43:10.725769 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-30 00:43:10.725782 | orchestrator | 2026-03-30 00:43:10.725795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725808 | orchestrator | Monday 30 March 2026 00:43:05 +0000 (0:00:00.617) 0:00:55.257 ********** 2026-03-30 00:43:10.725821 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725850 | orchestrator | 2026-03-30 00:43:10.725863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725904 | orchestrator | Monday 30 March 2026 00:43:05 +0000 (0:00:00.196) 0:00:55.453 ********** 2026-03-30 00:43:10.725918 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725930 | orchestrator | 2026-03-30 00:43:10.725942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.725954 | orchestrator | Monday 30 March 2026 00:43:05 +0000 (0:00:00.185) 0:00:55.638 ********** 2026-03-30 00:43:10.725981 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.725994 | orchestrator | 2026-03-30 00:43:10.726007 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-30 00:43:10.726081 | orchestrator | Monday 30 March 2026 00:43:05 +0000 (0:00:00.178) 0:00:55.817 ********** 2026-03-30 00:43:10.726096 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726106 | orchestrator | 2026-03-30 00:43:10.726117 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-30 00:43:10.726161 | orchestrator | Monday 30 March 2026 00:43:05 +0000 
(0:00:00.163) 0:00:55.981 ********** 2026-03-30 00:43:10.726172 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726183 | orchestrator | 2026-03-30 00:43:10.726194 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-30 00:43:10.726219 | orchestrator | Monday 30 March 2026 00:43:06 +0000 (0:00:00.245) 0:00:56.227 ********** 2026-03-30 00:43:10.726230 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}}) 2026-03-30 00:43:10.726242 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5c90778-4ce0-5f2b-bfca-518c358a14f4'}}) 2026-03-30 00:43:10.726252 | orchestrator | 2026-03-30 00:43:10.726263 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-30 00:43:10.726275 | orchestrator | Monday 30 March 2026 00:43:06 +0000 (0:00:00.164) 0:00:56.391 ********** 2026-03-30 00:43:10.726287 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}) 2026-03-30 00:43:10.726311 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'}) 2026-03-30 00:43:10.726322 | orchestrator | 2026-03-30 00:43:10.726333 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-30 00:43:10.726363 | orchestrator | Monday 30 March 2026 00:43:08 +0000 (0:00:01.807) 0:00:58.198 ********** 2026-03-30 00:43:10.726374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:10.726387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:10.726398 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726421 | orchestrator | 2026-03-30 00:43:10.726432 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-30 00:43:10.726443 | orchestrator | Monday 30 March 2026 00:43:08 +0000 (0:00:00.150) 0:00:58.349 ********** 2026-03-30 00:43:10.726454 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}) 2026-03-30 00:43:10.726465 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'}) 2026-03-30 00:43:10.726476 | orchestrator | 2026-03-30 00:43:10.726487 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-30 00:43:10.726498 | orchestrator | Monday 30 March 2026 00:43:09 +0000 (0:00:01.347) 0:00:59.696 ********** 2026-03-30 00:43:10.726509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:10.726530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:10.726541 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726552 | orchestrator | 2026-03-30 00:43:10.726563 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-30 00:43:10.726574 | orchestrator | Monday 30 March 2026 00:43:09 +0000 (0:00:00.140) 0:00:59.836 ********** 2026-03-30 00:43:10.726584 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726595 | 
orchestrator | 2026-03-30 00:43:10.726606 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-30 00:43:10.726617 | orchestrator | Monday 30 March 2026 00:43:09 +0000 (0:00:00.121) 0:00:59.958 ********** 2026-03-30 00:43:10.726627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:10.726638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:10.726649 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726660 | orchestrator | 2026-03-30 00:43:10.726671 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-30 00:43:10.726682 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.148) 0:01:00.106 ********** 2026-03-30 00:43:10.726693 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726722 | orchestrator | 2026-03-30 00:43:10.726734 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-30 00:43:10.726753 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.146) 0:01:00.252 ********** 2026-03-30 00:43:10.726765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:10.726776 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:10.726787 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726798 | orchestrator | 2026-03-30 00:43:10.726809 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-03-30 00:43:10.726819 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.131) 0:01:00.384 ********** 2026-03-30 00:43:10.726830 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726841 | orchestrator | 2026-03-30 00:43:10.726851 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-30 00:43:10.726862 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.131) 0:01:00.515 ********** 2026-03-30 00:43:10.726873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:10.726884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:10.726895 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:10.726906 | orchestrator | 2026-03-30 00:43:10.726916 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-30 00:43:10.726927 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.141) 0:01:00.657 ********** 2026-03-30 00:43:10.726938 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:43:10.726949 | orchestrator | 2026-03-30 00:43:10.726967 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-30 00:43:10.726984 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.120) 0:01:00.778 ********** 2026-03-30 00:43:10.727012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:16.670793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:16.670876 | 
orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.670884 | orchestrator | 2026-03-30 00:43:16.670890 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-30 00:43:16.670896 | orchestrator | Monday 30 March 2026 00:43:10 +0000 (0:00:00.266) 0:01:01.045 ********** 2026-03-30 00:43:16.670902 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:16.670907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:16.670912 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.670917 | orchestrator | 2026-03-30 00:43:16.670933 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-30 00:43:16.670938 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.137) 0:01:01.182 ********** 2026-03-30 00:43:16.670943 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})  2026-03-30 00:43:16.670948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})  2026-03-30 00:43:16.670953 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.670957 | orchestrator | 2026-03-30 00:43:16.670962 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-30 00:43:16.670966 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.153) 0:01:01.335 ********** 2026-03-30 00:43:16.670971 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.670976 | orchestrator | 2026-03-30 00:43:16.670980 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-30 00:43:16.670985 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.115) 0:01:01.451 ********** 2026-03-30 00:43:16.670989 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.670994 | orchestrator | 2026-03-30 00:43:16.670999 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-30 00:43:16.671003 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.108) 0:01:01.560 ********** 2026-03-30 00:43:16.671008 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671013 | orchestrator | 2026-03-30 00:43:16.671018 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-30 00:43:16.671022 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.122) 0:01:01.683 ********** 2026-03-30 00:43:16.671027 | orchestrator | ok: [testbed-node-5] => { 2026-03-30 00:43:16.671033 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-30 00:43:16.671038 | orchestrator | } 2026-03-30 00:43:16.671043 | orchestrator | 2026-03-30 00:43:16.671047 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-30 00:43:16.671052 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.133) 0:01:01.817 ********** 2026-03-30 00:43:16.671056 | orchestrator | ok: [testbed-node-5] => { 2026-03-30 00:43:16.671061 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-30 00:43:16.671066 | orchestrator | } 2026-03-30 00:43:16.671070 | orchestrator | 2026-03-30 00:43:16.671075 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-30 00:43:16.671079 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.129) 0:01:01.946 ********** 2026-03-30 00:43:16.671084 | orchestrator | ok: [testbed-node-5] => { 2026-03-30 00:43:16.671091 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-03-30 00:43:16.671098 | orchestrator | } 2026-03-30 00:43:16.671106 | orchestrator | 2026-03-30 00:43:16.671113 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-30 00:43:16.671120 | orchestrator | Monday 30 March 2026 00:43:11 +0000 (0:00:00.110) 0:01:02.057 ********** 2026-03-30 00:43:16.671148 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:43:16.671156 | orchestrator | 2026-03-30 00:43:16.671163 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-30 00:43:16.671170 | orchestrator | Monday 30 March 2026 00:43:12 +0000 (0:00:00.491) 0:01:02.549 ********** 2026-03-30 00:43:16.671177 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:43:16.671184 | orchestrator | 2026-03-30 00:43:16.671192 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-30 00:43:16.671199 | orchestrator | Monday 30 March 2026 00:43:12 +0000 (0:00:00.507) 0:01:03.057 ********** 2026-03-30 00:43:16.671206 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:43:16.671213 | orchestrator | 2026-03-30 00:43:16.671221 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-30 00:43:16.671228 | orchestrator | Monday 30 March 2026 00:43:13 +0000 (0:00:00.500) 0:01:03.557 ********** 2026-03-30 00:43:16.671235 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:43:16.671242 | orchestrator | 2026-03-30 00:43:16.671249 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-30 00:43:16.671256 | orchestrator | Monday 30 March 2026 00:43:13 +0000 (0:00:00.277) 0:01:03.835 ********** 2026-03-30 00:43:16.671264 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671271 | orchestrator | 2026-03-30 00:43:16.671278 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-03-30 00:43:16.671285 | orchestrator | Monday 30 March 2026 00:43:13 +0000 (0:00:00.093) 0:01:03.929 ********** 2026-03-30 00:43:16.671292 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671299 | orchestrator | 2026-03-30 00:43:16.671306 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-30 00:43:16.671314 | orchestrator | Monday 30 March 2026 00:43:13 +0000 (0:00:00.101) 0:01:04.030 ********** 2026-03-30 00:43:16.671321 | orchestrator | ok: [testbed-node-5] => { 2026-03-30 00:43:16.671328 | orchestrator |  "vgs_report": { 2026-03-30 00:43:16.671338 | orchestrator |  "vg": [] 2026-03-30 00:43:16.671359 | orchestrator |  } 2026-03-30 00:43:16.671369 | orchestrator | } 2026-03-30 00:43:16.671377 | orchestrator | 2026-03-30 00:43:16.671385 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-30 00:43:16.671394 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.127) 0:01:04.158 ********** 2026-03-30 00:43:16.671402 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671410 | orchestrator | 2026-03-30 00:43:16.671418 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-30 00:43:16.671427 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.115) 0:01:04.273 ********** 2026-03-30 00:43:16.671435 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671443 | orchestrator | 2026-03-30 00:43:16.671451 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-30 00:43:16.671459 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.138) 0:01:04.412 ********** 2026-03-30 00:43:16.671467 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671475 | orchestrator | 2026-03-30 00:43:16.671484 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-03-30 00:43:16.671496 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.146) 0:01:04.559 ********** 2026-03-30 00:43:16.671504 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671512 | orchestrator | 2026-03-30 00:43:16.671521 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-30 00:43:16.671529 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.217) 0:01:04.777 ********** 2026-03-30 00:43:16.671537 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671545 | orchestrator | 2026-03-30 00:43:16.671554 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-30 00:43:16.671562 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.132) 0:01:04.909 ********** 2026-03-30 00:43:16.671571 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671585 | orchestrator | 2026-03-30 00:43:16.671593 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-30 00:43:16.671602 | orchestrator | Monday 30 March 2026 00:43:14 +0000 (0:00:00.127) 0:01:05.037 ********** 2026-03-30 00:43:16.671610 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671618 | orchestrator | 2026-03-30 00:43:16.671626 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-30 00:43:16.671634 | orchestrator | Monday 30 March 2026 00:43:15 +0000 (0:00:00.142) 0:01:05.180 ********** 2026-03-30 00:43:16.671642 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:43:16.671651 | orchestrator | 2026-03-30 00:43:16.671659 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-30 00:43:16.671667 | orchestrator | Monday 30 March 2026 00:43:15 +0000 (0:00:00.147) 0:01:05.327 ********** 2026-03-30 00:43:16.671676 | orchestrator | skipping: 
[testbed-node-5]
2026-03-30 00:43:16.671684 | orchestrator |
2026-03-30 00:43:16.671706 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-30 00:43:16.671715 | orchestrator | Monday 30 March 2026 00:43:15 +0000 (0:00:00.353) 0:01:05.681 **********
2026-03-30 00:43:16.671722 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671729 | orchestrator |
2026-03-30 00:43:16.671736 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-30 00:43:16.671744 | orchestrator | Monday 30 March 2026 00:43:15 +0000 (0:00:00.134) 0:01:05.816 **********
2026-03-30 00:43:16.671751 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671758 | orchestrator |
2026-03-30 00:43:16.671765 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-30 00:43:16.671772 | orchestrator | Monday 30 March 2026 00:43:15 +0000 (0:00:00.138) 0:01:05.954 **********
2026-03-30 00:43:16.671780 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671787 | orchestrator |
2026-03-30 00:43:16.671794 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-30 00:43:16.671801 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.167) 0:01:06.122 **********
2026-03-30 00:43:16.671808 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671816 | orchestrator |
2026-03-30 00:43:16.671823 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-30 00:43:16.671830 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.144) 0:01:06.267 **********
2026-03-30 00:43:16.671837 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671845 | orchestrator |
2026-03-30 00:43:16.671852 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-30 00:43:16.671859 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.138) 0:01:06.405 **********
2026-03-30 00:43:16.671866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:16.671874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:16.671881 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671888 | orchestrator |
2026-03-30 00:43:16.671895 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-30 00:43:16.671903 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.149) 0:01:06.555 **********
2026-03-30 00:43:16.671910 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:16.671917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:16.671925 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:16.671932 | orchestrator |
2026-03-30 00:43:16.671939 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-30 00:43:16.671952 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.147) 0:01:06.703 **********
2026-03-30 00:43:16.671964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784588 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784597 | orchestrator |
2026-03-30 00:43:19.784602 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-30 00:43:19.784608 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.161) 0:01:06.864 **********
2026-03-30 00:43:19.784612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784637 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784641 | orchestrator |
2026-03-30 00:43:19.784645 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-30 00:43:19.784648 | orchestrator | Monday 30 March 2026 00:43:16 +0000 (0:00:00.155) 0:01:07.020 **********
2026-03-30 00:43:19.784652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784660 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784665 | orchestrator |
2026-03-30 00:43:19.784668 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-30 00:43:19.784672 | orchestrator | Monday 30 March 2026 00:43:17 +0000 (0:00:00.167) 0:01:07.188 **********
2026-03-30 00:43:19.784676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784684 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784708 | orchestrator |
2026-03-30 00:43:19.784712 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-30 00:43:19.784716 | orchestrator | Monday 30 March 2026 00:43:17 +0000 (0:00:00.143) 0:01:07.331 **********
2026-03-30 00:43:19.784720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784727 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784731 | orchestrator |
2026-03-30 00:43:19.784735 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-30 00:43:19.784738 | orchestrator | Monday 30 March 2026 00:43:17 +0000 (0:00:00.376) 0:01:07.707 **********
2026-03-30 00:43:19.784742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784750 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784769 | orchestrator |
2026-03-30 00:43:19.784773 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-30 00:43:19.784777 | orchestrator | Monday 30 March 2026 00:43:17 +0000 (0:00:00.207) 0:01:07.915 **********
2026-03-30 00:43:19.784781 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:43:19.784785 | orchestrator |
2026-03-30 00:43:19.784789 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-30 00:43:19.784793 | orchestrator | Monday 30 March 2026 00:43:18 +0000 (0:00:00.494) 0:01:08.409 **********
2026-03-30 00:43:19.784797 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:43:19.784800 | orchestrator |
2026-03-30 00:43:19.784804 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-30 00:43:19.784808 | orchestrator | Monday 30 March 2026 00:43:18 +0000 (0:00:00.508) 0:01:08.917 **********
2026-03-30 00:43:19.784812 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:43:19.784815 | orchestrator |
2026-03-30 00:43:19.784819 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-30 00:43:19.784823 | orchestrator | Monday 30 March 2026 00:43:18 +0000 (0:00:00.154) 0:01:09.072 **********
2026-03-30 00:43:19.784827 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'vg_name': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784832 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'vg_name': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784836 | orchestrator |
2026-03-30 00:43:19.784840 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-30 00:43:19.784844 | orchestrator | Monday 30 March 2026 00:43:19 +0000 (0:00:00.174) 0:01:09.246 **********
2026-03-30 00:43:19.784859 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784863 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784867 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784871 | orchestrator |
2026-03-30 00:43:19.784874 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-30 00:43:19.784878 | orchestrator | Monday 30 March 2026 00:43:19 +0000 (0:00:00.168) 0:01:09.415 **********
2026-03-30 00:43:19.784882 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784890 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784894 | orchestrator |
2026-03-30 00:43:19.784897 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-30 00:43:19.784901 | orchestrator | Monday 30 March 2026 00:43:19 +0000 (0:00:00.162) 0:01:09.577 **********
2026-03-30 00:43:19.784905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'})
2026-03-30 00:43:19.784909 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'})
2026-03-30 00:43:19.784913 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:19.784917 | orchestrator |
2026-03-30 00:43:19.784920 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-30 00:43:19.784924 | orchestrator | Monday 30 March 2026 00:43:19 +0000 (0:00:00.157) 0:01:09.735 **********
2026-03-30 00:43:19.784928 | orchestrator | ok: [testbed-node-5] => {
2026-03-30 00:43:19.784932 | orchestrator |  "lvm_report": {
2026-03-30 00:43:19.784936 | orchestrator |  "lv": [
2026-03-30 00:43:19.784944 | orchestrator |  {
2026-03-30 00:43:19.784948 | orchestrator |  "lv_name": "osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f",
2026-03-30 00:43:19.784952 | orchestrator |  "vg_name": "ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f"
2026-03-30 00:43:19.784956 | orchestrator |  },
2026-03-30 00:43:19.784960 | orchestrator |  {
2026-03-30 00:43:19.784964 | orchestrator |  "lv_name": "osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4",
2026-03-30 00:43:19.784967 | orchestrator |  "vg_name": "ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4"
2026-03-30 00:43:19.784971 | orchestrator |  }
2026-03-30 00:43:19.784975 | orchestrator |  ],
2026-03-30 00:43:19.784979 | orchestrator |  "pv": [
2026-03-30 00:43:19.784983 | orchestrator |  {
2026-03-30 00:43:19.784986 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-30 00:43:19.784990 | orchestrator |  "vg_name": "ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f"
2026-03-30 00:43:19.784994 | orchestrator |  },
2026-03-30 00:43:19.784998 | orchestrator |  {
2026-03-30 00:43:19.785002 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-30 00:43:19.785005 | orchestrator |  "vg_name": "ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4"
2026-03-30 00:43:19.785009 | orchestrator |  }
2026-03-30 00:43:19.785013 | orchestrator |  ]
2026-03-30 00:43:19.785017 | orchestrator |  }
2026-03-30 00:43:19.785021 | orchestrator | }
2026-03-30 00:43:19.785025 | orchestrator |
2026-03-30 00:43:19.785029 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:43:19.785033 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-30 00:43:19.785037 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-30 00:43:19.785041 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-30 00:43:19.785044 | orchestrator |
2026-03-30 00:43:19.785048 | orchestrator |
2026-03-30 00:43:19.785052 | orchestrator |
2026-03-30 00:43:19.785060 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:43:19.785064 | orchestrator | Monday 30 March 2026 00:43:19 +0000 (0:00:00.143) 0:01:09.878 **********
2026-03-30 00:43:19.785068 | orchestrator | ===============================================================================
2026-03-30 00:43:19.785072 | orchestrator | Create block VGs -------------------------------------------------------- 5.86s
2026-03-30 00:43:19.785075 | orchestrator | Create block LVs -------------------------------------------------------- 4.19s
2026-03-30 00:43:19.785079 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s
2026-03-30 00:43:19.785083 | orchestrator | Add known partitions to the list of available block devices ------------- 1.75s
2026-03-30 00:43:19.785086 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s
2026-03-30 00:43:19.785090 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s
2026-03-30 00:43:19.785094 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s
2026-03-30 00:43:19.785097 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.47s
2026-03-30 00:43:19.785104 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s
2026-03-30 00:43:20.213583 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2026-03-30 00:43:20.213720 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2026-03-30 00:43:20.213739 | orchestrator | Print LVM report data --------------------------------------------------- 0.83s
2026-03-30 00:43:20.213753 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s
2026-03-30 00:43:20.213766 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.76s
2026-03-30 00:43:20.213814 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2026-03-30 00:43:20.213830 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-03-30 00:43:20.213859 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.66s
2026-03-30 00:43:20.213875 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s
2026-03-30 00:43:20.213888 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-03-30 00:43:20.213902 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2026-03-30 00:43:31.676351 | orchestrator | 2026-03-30 00:43:31 | INFO  | Prepare task for execution of facts.
2026-03-30 00:43:31.745547 | orchestrator | 2026-03-30 00:43:31 | INFO  | Task 218ba3e2-9639-41c1-9411-212772403a94 (facts) was prepared for execution.
2026-03-30 00:43:31.745663 | orchestrator | 2026-03-30 00:43:31 | INFO  | It takes a moment until task 218ba3e2-9639-41c1-9411-212772403a94 (facts) has been started and output is visible here.
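The play above gathers `lvs`/`pvs` JSON reports, combines them ("Combine JSON from _lvs_cmd_output/_pvs_cmd_output"), and derives the VG/LV name list used by the `lvm_volumes` sanity checks. A minimal sketch of that combine step, using the values printed in the log; the combining logic here is illustrative, not the role's actual implementation:

```python
import json

# JSON shaped like `lvs --reportformat json -o lv_name,vg_name` /
# `pvs --reportformat json -o pv_name,vg_name`; values copied from the log.
lvs_output = json.loads("""{"report": [{"lv": [
  {"lv_name": "osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f",
   "vg_name": "ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f"},
  {"lv_name": "osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4",
   "vg_name": "ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4"}]}]}""")
pvs_output = json.loads("""{"report": [{"pv": [
  {"pv_name": "/dev/sdb",
   "vg_name": "ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f"},
  {"pv_name": "/dev/sdc",
   "vg_name": "ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4"}]}]}""")

# Merge both reports into one structure like the "lvm_report" printed above.
lvm_report = {
    "lv": lvs_output["report"][0]["lv"],
    "pv": pvs_output["report"][0]["pv"],
}

# Build the "VG/LV" name list that the lvm_volumes checks compare against.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
print(vg_lv_names)
```

Each OSD block LV ends up addressable as `<vg_name>/<lv_name>`, which is the form the "Fail if ... LV defined in lvm_volumes is missing" tasks check for.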
2026-03-30 00:43:42.758555 | orchestrator |
2026-03-30 00:43:42.758732 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-30 00:43:42.758756 | orchestrator |
2026-03-30 00:43:42.758769 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-30 00:43:42.758782 | orchestrator | Monday 30 March 2026 00:43:34 +0000 (0:00:00.304) 0:00:00.304 **********
2026-03-30 00:43:42.758796 | orchestrator | ok: [testbed-manager]
2026-03-30 00:43:42.758811 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:43:42.758823 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:43:42.758835 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:43:42.758848 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:43:42.758860 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:43:42.758871 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:43:42.758884 | orchestrator |
2026-03-30 00:43:42.758894 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-30 00:43:42.758902 | orchestrator | Monday 30 March 2026 00:43:35 +0000 (0:00:01.215) 0:00:01.520 **********
2026-03-30 00:43:42.758910 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:43:42.758918 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:43:42.758926 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:43:42.758933 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:43:42.758940 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:43:42.758947 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:43:42.758954 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:42.758962 | orchestrator |
2026-03-30 00:43:42.758969 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-30 00:43:42.758976 | orchestrator |
2026-03-30 00:43:42.758984 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-30 00:43:42.758991 | orchestrator | Monday 30 March 2026 00:43:37 +0000 (0:00:01.184) 0:00:02.704 **********
2026-03-30 00:43:42.758998 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:43:42.759006 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:43:42.759013 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:43:42.759020 | orchestrator | ok: [testbed-manager]
2026-03-30 00:43:42.759027 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:43:42.759034 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:43:42.759041 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:43:42.759048 | orchestrator |
2026-03-30 00:43:42.759056 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-30 00:43:42.759063 | orchestrator |
2026-03-30 00:43:42.759070 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-30 00:43:42.759077 | orchestrator | Monday 30 March 2026 00:43:41 +0000 (0:00:04.761) 0:00:07.466 **********
2026-03-30 00:43:42.759085 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:43:42.759092 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:43:42.759124 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:43:42.759133 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:43:42.759141 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:43:42.759149 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:43:42.759157 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:43:42.759165 | orchestrator |
2026-03-30 00:43:42.759173 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:43:42.759182 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759192 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759200 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759207 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759214 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759222 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759229 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:43:42.759236 | orchestrator |
2026-03-30 00:43:42.759243 | orchestrator |
2026-03-30 00:43:42.759251 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:43:42.759258 | orchestrator | Monday 30 March 2026 00:43:42 +0000 (0:00:00.513) 0:00:07.979 **********
2026-03-30 00:43:42.759265 | orchestrator | ===============================================================================
2026-03-30 00:43:42.759272 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.76s
2026-03-30 00:43:42.759279 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s
2026-03-30 00:43:42.759298 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s
2026-03-30 00:43:42.759306 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2026-03-30 00:43:54.331803 | orchestrator | 2026-03-30 00:43:54 | INFO  | Prepare task for execution of frr.
2026-03-30 00:43:54.401257 | orchestrator | 2026-03-30 00:43:54 | INFO  | Task 0b58bfc4-2f0e-4d6b-b4aa-5711abc5b014 (frr) was prepared for execution.
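Each play finishes with a PLAY RECAP like the one above. In CI it is often useful to parse those recap lines to detect failed or unreachable hosts programmatically; a hypothetical sketch (the regex and helper name are illustrative, not part of OSISM or Zuul):

```python
import re

# Matches Ansible recap lines such as:
#   testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2 ...
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(recap_lines):
    """Return hosts whose recap shows failed or unreachable counts > 0."""
    bad = []
    for line in recap_lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m["failed"]) or int(m["unreachable"])):
            bad.append(m["host"])
    return bad

recap = [
    "testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0",
    "testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0",
]
print(failed_hosts(recap))  # → []
```

With every host at failed=0 and unreachable=0, as in the recaps above, the list comes back empty and the job proceeds to the next task.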
2026-03-30 00:43:54.401371 | orchestrator | 2026-03-30 00:43:54 | INFO  | It takes a moment until task 0b58bfc4-2f0e-4d6b-b4aa-5711abc5b014 (frr) has been started and output is visible here.
2026-03-30 00:44:17.843367 | orchestrator |
2026-03-30 00:44:17.843443 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-30 00:44:17.843451 | orchestrator |
2026-03-30 00:44:17.843457 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-30 00:44:17.843464 | orchestrator | Monday 30 March 2026 00:43:57 +0000 (0:00:00.272) 0:00:00.272 **********
2026-03-30 00:44:17.843469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-30 00:44:17.843477 | orchestrator |
2026-03-30 00:44:17.843482 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-30 00:44:17.843487 | orchestrator | Monday 30 March 2026 00:43:57 +0000 (0:00:00.199) 0:00:00.471 **********
2026-03-30 00:44:17.843493 | orchestrator | changed: [testbed-manager]
2026-03-30 00:44:17.843499 | orchestrator |
2026-03-30 00:44:17.843504 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-30 00:44:17.843527 | orchestrator | Monday 30 March 2026 00:43:59 +0000 (0:00:01.390) 0:00:01.862 **********
2026-03-30 00:44:17.843533 | orchestrator | changed: [testbed-manager]
2026-03-30 00:44:17.843538 | orchestrator |
2026-03-30 00:44:17.843543 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-30 00:44:17.843548 | orchestrator | Monday 30 March 2026 00:44:07 +0000 (0:00:08.552) 0:00:10.415 **********
2026-03-30 00:44:17.843553 | orchestrator | ok: [testbed-manager]
2026-03-30 00:44:17.843559 | orchestrator |
2026-03-30 00:44:17.843565 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-30 00:44:17.843570 | orchestrator | Monday 30 March 2026 00:44:08 +0000 (0:00:00.906) 0:00:11.322 **********
2026-03-30 00:44:17.843575 | orchestrator | changed: [testbed-manager]
2026-03-30 00:44:17.843580 | orchestrator |
2026-03-30 00:44:17.843585 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-30 00:44:17.843590 | orchestrator | Monday 30 March 2026 00:44:09 +0000 (0:00:00.821) 0:00:12.143 **********
2026-03-30 00:44:17.843595 | orchestrator | ok: [testbed-manager]
2026-03-30 00:44:17.843601 | orchestrator |
2026-03-30 00:44:17.843606 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-03-30 00:44:17.843611 | orchestrator | Monday 30 March 2026 00:44:10 +0000 (0:00:01.167) 0:00:13.311 **********
2026-03-30 00:44:17.843616 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:44:17.843621 | orchestrator |
2026-03-30 00:44:17.843626 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-03-30 00:44:17.843682 | orchestrator | Monday 30 March 2026 00:44:10 +0000 (0:00:00.158) 0:00:13.469 **********
2026-03-30 00:44:17.843688 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:44:17.843693 | orchestrator |
2026-03-30 00:44:17.843698 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-03-30 00:44:17.843703 | orchestrator | Monday 30 March 2026 00:44:10 +0000 (0:00:00.225) 0:00:13.694 **********
2026-03-30 00:44:17.843709 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:44:17.843714 | orchestrator |
2026-03-30 00:44:17.843719 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-30 00:44:17.843725 | orchestrator | Monday 30 March 2026 00:44:11 +0000 (0:00:00.144) 0:00:13.839 **********
2026-03-30 00:44:17.843730 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:44:17.843735 | orchestrator |
2026-03-30 00:44:17.843741 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-30 00:44:17.843746 | orchestrator | Monday 30 March 2026 00:44:11 +0000 (0:00:00.112) 0:00:13.951 **********
2026-03-30 00:44:17.843751 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:44:17.843756 | orchestrator |
2026-03-30 00:44:17.843762 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-30 00:44:17.843767 | orchestrator | Monday 30 March 2026 00:44:11 +0000 (0:00:00.136) 0:00:14.087 **********
2026-03-30 00:44:17.843772 | orchestrator | changed: [testbed-manager]
2026-03-30 00:44:17.843777 | orchestrator |
2026-03-30 00:44:17.843782 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-30 00:44:17.843787 | orchestrator | Monday 30 March 2026 00:44:12 +0000 (0:00:00.879) 0:00:14.967 **********
2026-03-30 00:44:17.843792 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-30 00:44:17.843798 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-30 00:44:17.843804 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-30 00:44:17.843809 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-30 00:44:17.843815 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-30 00:44:17.843820 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-30 00:44:17.843830 | orchestrator |
2026-03-30 00:44:17.843835 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-30 00:44:17.843851 | orchestrator | Monday 30 March 2026 00:44:15 +0000 (0:00:03.025) 0:00:17.992 **********
2026-03-30 00:44:17.843856 | orchestrator | ok: [testbed-manager]
2026-03-30 00:44:17.843861 | orchestrator |
2026-03-30 00:44:17.843867 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-30 00:44:17.843872 | orchestrator | Monday 30 March 2026 00:44:16 +0000 (0:00:01.074) 0:00:19.067 **********
2026-03-30 00:44:17.843877 | orchestrator | changed: [testbed-manager]
2026-03-30 00:44:17.843882 | orchestrator |
2026-03-30 00:44:17.843887 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:44:17.843892 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-30 00:44:17.843898 | orchestrator |
2026-03-30 00:44:17.843903 | orchestrator |
2026-03-30 00:44:17.843921 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:44:17.843926 | orchestrator | Monday 30 March 2026 00:44:17 +0000 (0:00:01.330) 0:00:20.398 **********
2026-03-30 00:44:17.843932 | orchestrator | ===============================================================================
2026-03-30 00:44:17.843938 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.55s
2026-03-30 00:44:17.843944 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.03s
2026-03-30 00:44:17.843949 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.39s
2026-03-30 00:44:17.843955 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.33s
2026-03-30 00:44:17.843961 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s
2026-03-30 00:44:17.843967 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.07s
2026-03-30 00:44:17.843973 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.91s
2026-03-30 00:44:17.843978 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.88s
2026-03-30 00:44:17.843984 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.82s
2026-03-30 00:44:17.843990 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.23s
2026-03-30 00:44:17.843995 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-03-30 00:44:17.844001 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s
2026-03-30 00:44:17.844007 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s
2026-03-30 00:44:17.844012 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s
2026-03-30 00:44:17.844018 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.11s
2026-03-30 00:44:17.966440 | orchestrator |
2026-03-30 00:44:17.969681 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 30 00:44:17 UTC 2026
2026-03-30 00:44:17.969740 | orchestrator |
2026-03-30 00:44:19.132034 | orchestrator | 2026-03-30 00:44:19 | INFO  | Collection nutshell is prepared for execution
2026-03-30 00:44:19.246391 | orchestrator | 2026-03-30 00:44:19 | INFO  | A [0] - dotfiles
2026-03-30 00:44:29.282813 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - homer
2026-03-30 00:44:29.282931 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - netdata
2026-03-30 00:44:29.282954 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - openstackclient
2026-03-30 00:44:29.282970 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - phpmyadmin
2026-03-30 00:44:29.282999 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - common
2026-03-30 00:44:29.286821 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- loadbalancer
2026-03-30 00:44:29.286873 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [2] --- opensearch
2026-03-30 00:44:29.287051 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [2] --- mariadb-ng
2026-03-30 00:44:29.287908 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [3] ---- horizon
2026-03-30 00:44:29.288094 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [3] ---- keystone
2026-03-30 00:44:29.288853 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- neutron
2026-03-30 00:44:29.289110 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [5] ------ wait-for-nova
2026-03-30 00:44:29.289545 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [6] ------- octavia
2026-03-30 00:44:29.290973 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- barbican
2026-03-30 00:44:29.291121 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- designate
2026-03-30 00:44:29.291139 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- ironic
2026-03-30 00:44:29.291421 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- placement
2026-03-30 00:44:29.291843 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- magnum
2026-03-30 00:44:29.293439 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- openvswitch
2026-03-30 00:44:29.293943 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [2] --- ovn
2026-03-30 00:44:29.294587 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- memcached
2026-03-30 00:44:29.295005 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- redis
2026-03-30 00:44:29.295159 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- rabbitmq-ng
2026-03-30 00:44:29.296131 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - kubernetes
2026-03-30 00:44:29.299208 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- kubeconfig
2026-03-30 00:44:29.299256 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- copy-kubeconfig
2026-03-30 00:44:29.299580 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [0] - ceph
2026-03-30 00:44:29.302452 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [1] -- ceph-pools
2026-03-30 00:44:29.302541 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [2] --- copy-ceph-keys
2026-03-30 00:44:29.302607 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [3] ---- cephclient
2026-03-30 00:44:29.302832 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-30 00:44:29.303127 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- wait-for-keystone
2026-03-30 00:44:29.303466 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-30 00:44:29.303615 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [5] ------ glance
2026-03-30 00:44:29.303829 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [5] ------ cinder
2026-03-30 00:44:29.304307 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [5] ------ nova
2026-03-30 00:44:29.304847 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [4] ----- prometheus
2026-03-30 00:44:29.304867 | orchestrator | 2026-03-30 00:44:29 | INFO  | A [5] ------ grafana
2026-03-30 00:44:29.532753 | orchestrator | 2026-03-30 00:44:29 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-30 00:44:29.532821 | orchestrator | 2026-03-30 00:44:29 | INFO  | Tasks are running in the background
2026-03-30 00:44:31.365822 | orchestrator | 2026-03-30 00:44:31 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-30 00:44:33.559210 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state STARTED
2026-03-30 00:44:33.559449 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:33.560265 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:33.561115 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:33.561923 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:33.562696 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:33.563638 | orchestrator | 2026-03-30 00:44:33 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:33.563670 | orchestrator | 2026-03-30 00:44:33 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:36.605344 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state STARTED
2026-03-30 00:44:36.606343 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:36.606372 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:36.606957 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:36.607524 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:36.614574 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:36.615290 | orchestrator | 2026-03-30 00:44:36 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:36.615316 | orchestrator | 2026-03-30 00:44:36 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:39.721461 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state STARTED
2026-03-30 00:44:39.721554 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task
f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:39.722415 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:39.722802 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:39.723842 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:39.724460 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:39.725908 | orchestrator | 2026-03-30 00:44:39 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:39.725942 | orchestrator | 2026-03-30 00:44:39 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:42.790798 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state STARTED
2026-03-30 00:44:42.790906 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:42.793960 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:42.794206 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:42.794704 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:42.798350 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:42.801516 | orchestrator | 2026-03-30 00:44:42 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:42.801547 | orchestrator | 2026-03-30 00:44:42 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:45.902350 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state STARTED
2026-03-30 00:44:45.902433 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:45.902445 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:45.902474 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:45.902484 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:45.902496 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:45.902512 | orchestrator | 2026-03-30 00:44:45 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:45.902527 | orchestrator | 2026-03-30 00:44:45 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:48.937314 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state STARTED
2026-03-30 00:44:49.120736 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:49.120838 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:49.120853 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:49.120865 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:49.120876 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:49.120887 | orchestrator | 2026-03-30 00:44:48 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:49.120899 | orchestrator | 2026-03-30
00:44:48 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:52.020437 | orchestrator |
2026-03-30 00:44:52.020593 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-30 00:44:52.020640 | orchestrator |
2026-03-30 00:44:52.020654 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-30 00:44:52.020666 | orchestrator | Monday 30 March 2026 00:44:39 +0000 (0:00:00.345) 0:00:00.345 **********
2026-03-30 00:44:52.020677 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:44:52.020690 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:44:52.020701 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:44:52.020712 | orchestrator | changed: [testbed-manager]
2026-03-30 00:44:52.020722 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:44:52.020733 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:44:52.020744 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:44:52.020755 | orchestrator |
2026-03-30 00:44:52.020766 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-30 00:44:52.020777 | orchestrator | Monday 30 March 2026 00:44:43 +0000 (0:00:04.406) 0:00:04.752 **********
2026-03-30 00:44:52.020789 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-30 00:44:52.020800 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-30 00:44:52.020811 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-30 00:44:52.020822 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-30 00:44:52.020833 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-30 00:44:52.020864 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-30 00:44:52.020876 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-30 00:44:52.020887 | orchestrator |
2026-03-30 00:44:52.020905 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-30 00:44:52.020917 | orchestrator | Monday 30 March 2026 00:44:45 +0000 (0:00:02.060) 0:00:06.813 **********
2026-03-30 00:44:52.020932 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:44.627618', 'end': '2026-03-30 00:44:44.637349', 'delta': '0:00:00.009731', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.020951 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout':
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:45.482044', 'end': '2026-03-30 00:44:45.491499', 'delta': '0:00:00.009455', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.020963 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:44.746287', 'end': '2026-03-30 00:44:44.754435', 'delta': '0:00:00.008148', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.021002 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:44.636639', 'end': '2026-03-30 00:44:44.647202', 'delta': '0:00:00.010563', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.021019 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:44.741716', 'end': '2026-03-30 00:44:44.751570', 'delta': '0:00:00.009854', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.021043 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:44.600081', 'end': '2026-03-30 00:44:44.609399', 'delta': '0:00:00.009318', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.021055 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-30 00:44:44.691240', 'end': '2026-03-30 00:44:44.698115', 'delta': '0:00:00.006875', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-30 00:44:52.021067 | orchestrator |
2026-03-30 00:44:52.021078 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2026-03-30 00:44:52.021089 | orchestrator | Monday 30 March 2026 00:44:46 +0000 (0:00:00.994) 0:00:07.808 **********
2026-03-30 00:44:52.021155 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-30 00:44:52.021170 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-30 00:44:52.021181 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-30 00:44:52.021192 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-30 00:44:52.021203 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-30 00:44:52.021214 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-30 00:44:52.021224 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-30 00:44:52.021235 | orchestrator |
2026-03-30 00:44:52.021246 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-30 00:44:52.021257 | orchestrator | Monday 30 March 2026 00:44:48 +0000 (0:00:01.356) 0:00:09.165 **********
2026-03-30 00:44:52.021268 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-30 00:44:52.021279 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-30 00:44:52.021290 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-30 00:44:52.021301 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-30 00:44:52.021312 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-30 00:44:52.021323 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-30 00:44:52.021334 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-30 00:44:52.021345 | orchestrator |
2026-03-30 00:44:52.021356 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:44:52.021384 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021397 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021408 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021419 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021430 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021441 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021452 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:44:52.021462 | orchestrator |
2026-03-30 00:44:52.021473 | orchestrator |
2026-03-30 00:44:52.021489 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:44:52.021500 | orchestrator | Monday 30 March 2026 00:44:50 +0000 (0:00:02.922) 0:00:12.087 **********
2026-03-30 00:44:52.021511 | orchestrator | ===============================================================================
2026-03-30 00:44:52.021522 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.41s
2026-03-30 00:44:52.021533 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.92s
2026-03-30 00:44:52.021543 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.06s
2026-03-30 00:44:52.021554 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.36s
2026-03-30 00:44:52.021565 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.
--- 0.99s
2026-03-30 00:44:52.021576 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task f3ddb9ee-06e2-44a0-af2a-86f83aaace06 is in state SUCCESS
2026-03-30 00:44:52.021587 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:52.021621 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:52.021731 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:52.022366 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:52.029530 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:52.029646 | orchestrator | 2026-03-30 00:44:52 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:52.029666 | orchestrator | 2026-03-30 00:44:52 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:55.073570 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:55.075402 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:55.077085 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:44:55.086393 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:55.086450 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:55.086673 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:55.090455 | orchestrator | 2026-03-30 00:44:55 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:55.090495 | orchestrator | 2026-03-30 00:44:55 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:44:58.153849 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:44:58.153970 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:44:58.153995 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:44:58.154831 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:44:58.154869 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:44:58.154883 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:44:58.154894 | orchestrator | 2026-03-30 00:44:58 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:44:58.154906 | orchestrator | 2026-03-30 00:44:58 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:01.398845 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:01.398948 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:01.398986 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:01.398999 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:01.399010 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:01.399040 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:01.399052 | orchestrator | 2026-03-30 00:45:01 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:01.399063 | orchestrator | 2026-03-30 00:45:01 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:04.425312 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:04.433229 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:04.439882 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:04.440908 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:04.446919 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:04.447742 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:04.449386 | orchestrator | 2026-03-30 00:45:04 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:04.449418 | orchestrator | 2026-03-30 00:45:04 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:07.599697 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:07.599814 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:07.599823 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:07.599828 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:07.599832 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task
6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:07.599836 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:07.599840 | orchestrator | 2026-03-30 00:45:07 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:07.599844 | orchestrator | 2026-03-30 00:45:07 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:10.665526 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:10.666516 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:10.667345 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:10.669177 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:10.669668 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:10.671528 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:10.672132 | orchestrator | 2026-03-30 00:45:10 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:10.672158 | orchestrator | 2026-03-30 00:45:10 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:13.736031 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:13.736198 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:13.736215 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:13.736228 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:13.736239 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:13.736250 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:13.736261 | orchestrator | 2026-03-30 00:45:13 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:13.736273 | orchestrator | 2026-03-30 00:45:13 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:16.880755 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:16.938899 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:16.939071 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:16.939086 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:16.939098 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:16.939145 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:16.939156 | orchestrator | 2026-03-30 00:45:16 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:16.939167 | orchestrator | 2026-03-30 00:45:16 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:19.974643 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state STARTED
2026-03-30 00:45:19.974750 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:19.974764 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:19.974776 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:19.974787 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:19.974798 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:19.974809 | orchestrator | 2026-03-30 00:45:19 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:19.974821 | orchestrator | 2026-03-30 00:45:19 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:23.001545 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task f0e69b1d-ba90-44c2-a867-2e06d4cb4bcd is in state SUCCESS
2026-03-30 00:45:23.002303 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:23.004358 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:23.006148 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:23.007431 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:23.011204 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:23.016408 | orchestrator | 2026-03-30 00:45:23 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:23.016458 | orchestrator | 2026-03-30 00:45:23 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:26.051967 | orchestrator | 2026-03-30 00:45:26 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:26.054343 | orchestrator | 2026-03-30 00:45:26 | INFO  | Task
7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:26.057089 | orchestrator | 2026-03-30 00:45:26 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state STARTED
2026-03-30 00:45:26.059877 | orchestrator | 2026-03-30 00:45:26 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:26.063647 | orchestrator | 2026-03-30 00:45:26 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:26.067015 | orchestrator | 2026-03-30 00:45:26 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:26.067061 | orchestrator | 2026-03-30 00:45:26 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:29.119178 | orchestrator | 2026-03-30 00:45:29 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:29.124723 | orchestrator | 2026-03-30 00:45:29 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:29.126206 | orchestrator | 2026-03-30 00:45:29 | INFO  | Task 682e6b2a-4993-482e-aae5-7211d7bf877e is in state SUCCESS
2026-03-30 00:45:29.128355 | orchestrator | 2026-03-30 00:45:29 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:29.130486 | orchestrator | 2026-03-30 00:45:29 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:29.131836 | orchestrator | 2026-03-30 00:45:29 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:29.132003 | orchestrator | 2026-03-30 00:45:29 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:32.181997 | orchestrator | 2026-03-30 00:45:32 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:32.182893 | orchestrator | 2026-03-30 00:45:32 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:32.184958 | orchestrator | 2026-03-30 00:45:32 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:32.187628 | orchestrator | 2026-03-30 00:45:32 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:32.189189 | orchestrator | 2026-03-30 00:45:32 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:32.189548 | orchestrator | 2026-03-30 00:45:32 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:35.273537 | orchestrator | 2026-03-30 00:45:35 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:35.292579 | orchestrator | 2026-03-30 00:45:35 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:35.306433 | orchestrator | 2026-03-30 00:45:35 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:35.318258 | orchestrator | 2026-03-30 00:45:35 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:35.330253 | orchestrator | 2026-03-30 00:45:35 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:35.330900 | orchestrator | 2026-03-30 00:45:35 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:38.404330 | orchestrator | 2026-03-30 00:45:38 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:38.404445 | orchestrator | 2026-03-30 00:45:38 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:38.404461 | orchestrator | 2026-03-30 00:45:38 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:38.404471 | orchestrator | 2026-03-30 00:45:38 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:38.404479 | orchestrator | 2026-03-30 00:45:38 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:38.404494 | orchestrator | 2026-03-30 00:45:38 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:41.452068 | orchestrator | 2026-03-30 00:45:41 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:41.452619 | orchestrator | 2026-03-30 00:45:41 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:41.453793 | orchestrator | 2026-03-30 00:45:41 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:41.464817 | orchestrator | 2026-03-30 00:45:41 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:41.464903 | orchestrator | 2026-03-30 00:45:41 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:41.464963 | orchestrator | 2026-03-30 00:45:41 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:44.544884 | orchestrator | 2026-03-30 00:45:44 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:44.546126 | orchestrator | 2026-03-30 00:45:44 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:44.548049 | orchestrator | 2026-03-30 00:45:44 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:45:44.549781 | orchestrator | 2026-03-30 00:45:44 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED
2026-03-30 00:45:44.551705 | orchestrator | 2026-03-30 00:45:44 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:45:44.551752 | orchestrator | 2026-03-30 00:45:44 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:45:47.581179 | orchestrator | 2026-03-30 00:45:47 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:45:47.581266 | orchestrator | 2026-03-30 00:45:47 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED
2026-03-30 00:45:47.581278 | orchestrator | 2026-03-30 00:45:47 | INFO  | Task
6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:45:47.583335 | orchestrator | 2026-03-30 00:45:47 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED 2026-03-30 00:45:47.583389 | orchestrator | 2026-03-30 00:45:47 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:45:47.583400 | orchestrator | 2026-03-30 00:45:47 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:45:50.632658 | orchestrator | 2026-03-30 00:45:50 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:45:50.634737 | orchestrator | 2026-03-30 00:45:50 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED 2026-03-30 00:45:50.637606 | orchestrator | 2026-03-30 00:45:50 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:45:50.637670 | orchestrator | 2026-03-30 00:45:50 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED 2026-03-30 00:45:50.637678 | orchestrator | 2026-03-30 00:45:50 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:45:50.637694 | orchestrator | 2026-03-30 00:45:50 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:45:53.676168 | orchestrator | 2026-03-30 00:45:53 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:45:53.677438 | orchestrator | 2026-03-30 00:45:53 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED 2026-03-30 00:45:53.679222 | orchestrator | 2026-03-30 00:45:53 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:45:53.681054 | orchestrator | 2026-03-30 00:45:53 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED 2026-03-30 00:45:53.681492 | orchestrator | 2026-03-30 00:45:53 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:45:53.681744 | orchestrator | 2026-03-30 00:45:53 | INFO  | Wait 1 
second(s) until the next check 2026-03-30 00:45:56.716782 | orchestrator | 2026-03-30 00:45:56 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:45:56.716855 | orchestrator | 2026-03-30 00:45:56 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED 2026-03-30 00:45:56.717645 | orchestrator | 2026-03-30 00:45:56 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:45:56.718936 | orchestrator | 2026-03-30 00:45:56 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED 2026-03-30 00:45:56.722350 | orchestrator | 2026-03-30 00:45:56 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:45:56.722397 | orchestrator | 2026-03-30 00:45:56 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:45:59.757145 | orchestrator | 2026-03-30 00:45:59 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:45:59.757888 | orchestrator | 2026-03-30 00:45:59 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED 2026-03-30 00:45:59.758989 | orchestrator | 2026-03-30 00:45:59 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:45:59.760014 | orchestrator | 2026-03-30 00:45:59 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED 2026-03-30 00:45:59.760856 | orchestrator | 2026-03-30 00:45:59 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:45:59.760961 | orchestrator | 2026-03-30 00:45:59 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:46:02.804219 | orchestrator | 2026-03-30 00:46:02 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:46:02.805911 | orchestrator | 2026-03-30 00:46:02 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state STARTED 2026-03-30 00:46:02.807940 | orchestrator | 2026-03-30 00:46:02 | INFO  | Task 
6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:46:02.811044 | orchestrator | 2026-03-30 00:46:02 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state STARTED 2026-03-30 00:46:02.814676 | orchestrator | 2026-03-30 00:46:02 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:46:02.814732 | orchestrator | 2026-03-30 00:46:02 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:46:05.864911 | orchestrator | 2026-03-30 00:46:05 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:46:05.866112 | orchestrator | 2026-03-30 00:46:05 | INFO  | Task 7b8fd15d-e306-4f86-8f13-32a97c255e3a is in state SUCCESS 2026-03-30 00:46:05.866566 | orchestrator | 2026-03-30 00:46:05.866595 | orchestrator | 2026-03-30 00:46:05.866604 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-30 00:46:05.866613 | orchestrator | 2026-03-30 00:46:05.866620 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-30 00:46:05.866626 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:00.970) 0:00:00.970 ********** 2026-03-30 00:46:05.866630 | orchestrator | ok: [testbed-manager] => { 2026-03-30 00:46:05.866636 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-03-30 00:46:05.866642 | orchestrator | } 2026-03-30 00:46:05.866646 | orchestrator | 2026-03-30 00:46:05.866650 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-03-30 00:46:05.866655 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:00.446) 0:00:01.417 ********** 2026-03-30 00:46:05.866659 | orchestrator | ok: [testbed-manager] 2026-03-30 00:46:05.866663 | orchestrator | 2026-03-30 00:46:05.866668 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-03-30 00:46:05.866672 | orchestrator | Monday 30 March 2026 00:44:43 +0000 (0:00:02.770) 0:00:04.187 ********** 2026-03-30 00:46:05.866676 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-03-30 00:46:05.866680 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-03-30 00:46:05.866684 | orchestrator | 2026-03-30 00:46:05.866688 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-03-30 00:46:05.866691 | orchestrator | Monday 30 March 2026 00:44:45 +0000 (0:00:02.672) 0:00:06.860 ********** 2026-03-30 00:46:05.866727 | orchestrator | changed: [testbed-manager] 2026-03-30 00:46:05.866731 | orchestrator | 2026-03-30 00:46:05.866735 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-03-30 00:46:05.866739 | orchestrator | Monday 30 March 2026 00:44:48 +0000 (0:00:02.179) 0:00:09.040 ********** 2026-03-30 00:46:05.866743 | orchestrator | changed: [testbed-manager] 2026-03-30 00:46:05.866746 | orchestrator | 2026-03-30 00:46:05.866750 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-03-30 00:46:05.866754 | orchestrator | Monday 30 March 2026 00:44:50 +0000 (0:00:02.046) 0:00:11.086 ********** 2026-03-30 00:46:05.866768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
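The `FAILED - RETRYING: ... (10 retries left)` lines here and in the following plays are Ansible's standard output for a task that uses `retries`/`delay`/`until`: the task re-runs its check until it succeeds or the retry budget is exhausted. The same wait-until-healthy pattern can be sketched in plain Python (a minimal illustration, not the role's actual code; `check` stands in for whatever health probe the task performs):

```python
import time


def wait_until(check, retries=10, delay=5):
    """Re-run `check` until it returns True, logging the countdown
    the way Ansible prints 'FAILED - RETRYING (N retries left).'"""
    for remaining in range(retries, 0, -1):
        if check():
            return True
        print(f"FAILED - RETRYING ({remaining - 1} retries left).")
        time.sleep(delay)
    return False
```

In the log above the first attempt fails (the compose service is not yet healthy), one retry message is printed, and the second attempt reports `ok`.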
2026-03-30 00:46:05.866772 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.866776 | orchestrator |
2026-03-30 00:46:05.866780 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-30 00:46:05.866783 | orchestrator | Monday 30 March 2026 00:45:16 +0000 (0:00:26.168) 0:00:37.254 **********
2026-03-30 00:46:05.866787 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.866791 | orchestrator |
2026-03-30 00:46:05.866795 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:46:05.866799 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.866804 | orchestrator |
2026-03-30 00:46:05.866808 | orchestrator |
2026-03-30 00:46:05.866812 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:46:05.866816 | orchestrator | Monday 30 March 2026 00:45:19 +0000 (0:00:03.388) 0:00:40.643 **********
2026-03-30 00:46:05.866820 | orchestrator | ===============================================================================
2026-03-30 00:46:05.866823 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.17s
2026-03-30 00:46:05.866827 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.39s
2026-03-30 00:46:05.866831 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.77s
2026-03-30 00:46:05.866835 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.67s
2026-03-30 00:46:05.866838 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.18s
2026-03-30 00:46:05.866842 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.05s
2026-03-30 00:46:05.866846 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.45s
2026-03-30 00:46:05.866850 | orchestrator |
2026-03-30 00:46:05.866853 | orchestrator |
2026-03-30 00:46:05.866857 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-30 00:46:05.866861 | orchestrator |
2026-03-30 00:46:05.866865 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-30 00:46:05.866868 | orchestrator | Monday 30 March 2026 00:44:38 +0000 (0:00:00.545) 0:00:00.545 **********
2026-03-30 00:46:05.866873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-30 00:46:05.866877 | orchestrator |
2026-03-30 00:46:05.866881 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-30 00:46:05.866885 | orchestrator | Monday 30 March 2026 00:44:38 +0000 (0:00:00.372) 0:00:00.918 **********
2026-03-30 00:46:05.866889 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-30 00:46:05.866892 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-30 00:46:05.866896 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-30 00:46:05.866900 | orchestrator |
2026-03-30 00:46:05.866904 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-30 00:46:05.866908 | orchestrator | Monday 30 March 2026 00:44:41 +0000 (0:00:02.445) 0:00:03.363 **********
2026-03-30 00:46:05.866915 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.866919 | orchestrator |
2026-03-30 00:46:05.866923 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-30 00:46:05.866927 | orchestrator | Monday 30 March 2026 00:44:45 +0000 (0:00:03.998) 0:00:07.362 **********
2026-03-30 00:46:05.866939 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-30 00:46:05.866943 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.866947 | orchestrator |
2026-03-30 00:46:05.866951 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-30 00:46:05.866955 | orchestrator | Monday 30 March 2026 00:45:19 +0000 (0:00:34.639) 0:00:42.002 **********
2026-03-30 00:46:05.866958 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.866962 | orchestrator |
2026-03-30 00:46:05.866966 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-30 00:46:05.866970 | orchestrator | Monday 30 March 2026 00:45:20 +0000 (0:00:00.944) 0:00:42.946 **********
2026-03-30 00:46:05.866973 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.866977 | orchestrator |
2026-03-30 00:46:05.866981 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-30 00:46:05.866985 | orchestrator | Monday 30 March 2026 00:45:21 +0000 (0:00:00.859) 0:00:43.806 **********
2026-03-30 00:46:05.866989 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.866992 | orchestrator |
2026-03-30 00:46:05.866996 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-30 00:46:05.867000 | orchestrator | Monday 30 March 2026 00:45:23 +0000 (0:00:01.588) 0:00:45.394 **********
2026-03-30 00:46:05.867003 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.867007 | orchestrator |
2026-03-30 00:46:05.867011 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-30 00:46:05.867015 | orchestrator | Monday 30 March 2026 00:45:24 +0000 (0:00:00.990) 0:00:46.385 **********
2026-03-30 00:46:05.867019 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.867022 | orchestrator |
2026-03-30 00:46:05.867026 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-30 00:46:05.867030 | orchestrator | Monday 30 March 2026 00:45:24 +0000 (0:00:00.638) 0:00:47.024 **********
2026-03-30 00:46:05.867033 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.867037 | orchestrator |
2026-03-30 00:46:05.867041 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:46:05.867045 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.867049 | orchestrator |
2026-03-30 00:46:05.867053 | orchestrator |
2026-03-30 00:46:05.867059 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:46:05.867063 | orchestrator | Monday 30 March 2026 00:45:26 +0000 (0:00:01.986) 0:00:49.011 **********
2026-03-30 00:46:05.867067 | orchestrator | ===============================================================================
2026-03-30 00:46:05.867071 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.64s
2026-03-30 00:46:05.867074 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.00s
2026-03-30 00:46:05.867078 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.44s
2026-03-30 00:46:05.867082 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.99s
2026-03-30 00:46:05.867086 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.59s
2026-03-30 00:46:05.867089 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.99s
2026-03-30 00:46:05.867093 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.94s
2026-03-30 00:46:05.867097 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.86s
2026-03-30 00:46:05.867100 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.64s
2026-03-30 00:46:05.867108 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s
2026-03-30 00:46:05.867112 | orchestrator |
2026-03-30 00:46:05.867115 | orchestrator |
2026-03-30 00:46:05.867119 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-30 00:46:05.867123 | orchestrator |
2026-03-30 00:46:05.867127 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-30 00:46:05.867130 | orchestrator | Monday 30 March 2026 00:44:54 +0000 (0:00:00.413) 0:00:00.413 **********
2026-03-30 00:46:05.867134 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.867138 | orchestrator |
2026-03-30 00:46:05.867141 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-30 00:46:05.867145 | orchestrator | Monday 30 March 2026 00:44:56 +0000 (0:00:02.181) 0:00:02.595 **********
2026-03-30 00:46:05.867149 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-30 00:46:05.867153 | orchestrator |
2026-03-30 00:46:05.867157 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-30 00:46:05.867160 | orchestrator | Monday 30 March 2026 00:44:57 +0000 (0:00:01.016) 0:00:03.612 **********
2026-03-30 00:46:05.867164 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.867168 | orchestrator |
2026-03-30 00:46:05.867171 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-30 00:46:05.867175 | orchestrator | Monday 30 March 2026 00:44:59 +0000 (0:00:02.164) 0:00:05.776 **********
2026-03-30 00:46:05.867179 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-30 00:46:05.867183 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.867186 | orchestrator |
2026-03-30 00:46:05.867190 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-30 00:46:05.867194 | orchestrator | Monday 30 March 2026 00:45:58 +0000 (0:00:58.749) 0:01:04.526 **********
2026-03-30 00:46:05.867198 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.867201 | orchestrator |
2026-03-30 00:46:05.867206 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:46:05.867210 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.867214 | orchestrator |
2026-03-30 00:46:05.867217 | orchestrator |
2026-03-30 00:46:05.867221 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:46:05.867227 | orchestrator | Monday 30 March 2026 00:46:03 +0000 (0:00:05.143) 0:01:09.669 **********
2026-03-30 00:46:05.867231 | orchestrator | ===============================================================================
2026-03-30 00:46:05.867245 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.75s
2026-03-30 00:46:05.867250 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.15s
2026-03-30 00:46:05.867256 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.18s
2026-03-30 00:46:05.867262 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.16s
2026-03-30 00:46:05.867268 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.02s
2026-03-30 00:46:05.868190 | orchestrator | 2026-03-30 00:46:05 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED
2026-03-30 00:46:05.869881 | orchestrator |
2026-03-30 00:46:05.869912 | orchestrator |
2026-03-30 00:46:05.869917 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:46:05.869922 | orchestrator |
2026-03-30 00:46:05.869927 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 00:46:05.869931 | orchestrator | Monday 30 March 2026 00:44:39 +0000 (0:00:00.792) 0:00:00.792 **********
2026-03-30 00:46:05.869935 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-30 00:46:05.869940 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-30 00:46:05.869944 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-30 00:46:05.869959 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-30 00:46:05.869963 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-30 00:46:05.869967 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-30 00:46:05.869971 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-30 00:46:05.869975 | orchestrator |
2026-03-30 00:46:05.869979 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-30 00:46:05.869982 | orchestrator |
2026-03-30 00:46:05.869994 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-30 00:46:05.869998 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:01.485) 0:00:02.278 **********
2026-03-30 00:46:05.870008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
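The interleaved `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` messages throughout this log come from the OSISM manager polling a set of asynchronous task IDs until each reaches a terminal state. The control flow can be sketched as a plain poll loop (an illustration only; `get_state` is a hypothetical stand-in for the real task-state lookup, and only SUCCESS/FAILURE are treated as terminal here):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll each task's state until none remain pending, logging states
    the way the 'Wait 1 second(s) until the next check' loop does."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

This matches the shape of the log: each check cycle reports every still-running UUID, tasks drop out of the list as they report SUCCESS, and the loop sleeps between cycles.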
2026-03-30 00:46:05.870040 | orchestrator |
2026-03-30 00:46:05.870045 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-30 00:46:05.870049 | orchestrator | Monday 30 March 2026 00:44:41 +0000 (0:00:01.116) 0:00:03.394 **********
2026-03-30 00:46:05.870053 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:46:05.870058 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:46:05.870062 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:46:05.870066 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:46:05.870070 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:46:05.870073 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:46:05.870077 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.870081 | orchestrator |
2026-03-30 00:46:05.870085 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-30 00:46:05.870089 | orchestrator | Monday 30 March 2026 00:44:44 +0000 (0:00:02.503) 0:00:05.898 **********
2026-03-30 00:46:05.870093 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:46:05.870097 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:46:05.870101 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:46:05.870104 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:46:05.870108 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:46:05.870112 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:46:05.870115 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.870119 | orchestrator |
2026-03-30 00:46:05.870123 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-30 00:46:05.870127 | orchestrator | Monday 30 March 2026 00:44:47 +0000 (0:00:02.803) 0:00:08.701 **********
2026-03-30 00:46:05.870131 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.870135 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:46:05.870138 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:46:05.870142 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:46:05.870146 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:46:05.870150 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:46:05.870153 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:46:05.870157 | orchestrator |
2026-03-30 00:46:05.870161 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-30 00:46:05.870165 | orchestrator | Monday 30 March 2026 00:44:48 +0000 (0:00:01.608) 0:00:10.310 **********
2026-03-30 00:46:05.870168 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:46:05.870172 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:46:05.870176 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:46:05.870180 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:46:05.870183 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:46:05.870188 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:46:05.870194 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.870201 | orchestrator |
2026-03-30 00:46:05.870205 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-30 00:46:05.870209 | orchestrator | Monday 30 March 2026 00:44:59 +0000 (0:00:10.491) 0:00:20.802 **********
2026-03-30 00:46:05.870216 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:46:05.870220 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:46:05.870223 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:46:05.870227 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:46:05.870231 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:46:05.870234 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:46:05.870238 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.870242 | orchestrator |
2026-03-30 00:46:05.870246 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-30 00:46:05.870249 | orchestrator | Monday 30 March 2026 00:45:39 +0000 (0:00:39.968) 0:01:00.770 **********
2026-03-30 00:46:05.870254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:46:05.870259 | orchestrator |
2026-03-30 00:46:05.870263 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-30 00:46:05.870267 | orchestrator | Monday 30 March 2026 00:45:40 +0000 (0:00:01.524) 0:01:02.295 **********
2026-03-30 00:46:05.870271 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-30 00:46:05.870275 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-30 00:46:05.870278 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-30 00:46:05.870282 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-30 00:46:05.870294 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-30 00:46:05.870298 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-30 00:46:05.870302 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-30 00:46:05.870306 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-30 00:46:05.870310 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-30 00:46:05.870313 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-30 00:46:05.870317 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-30 00:46:05.870321 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-30 00:46:05.870325 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-30 00:46:05.870328 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-30 00:46:05.870332 | orchestrator |
2026-03-30 00:46:05.870336 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-30 00:46:05.870341 | orchestrator | Monday 30 March 2026 00:45:45 +0000 (0:00:04.670) 0:01:06.965 **********
2026-03-30 00:46:05.870344 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.870348 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:46:05.870352 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:46:05.870356 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:46:05.870360 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:46:05.870363 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:46:05.870367 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:46:05.870371 | orchestrator |
2026-03-30 00:46:05.870375 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-30 00:46:05.870379 | orchestrator | Monday 30 March 2026 00:45:46 +0000 (0:00:01.234) 0:01:08.200 **********
2026-03-30 00:46:05.870382 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.870386 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:46:05.870390 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:46:05.870394 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:46:05.870398 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:46:05.870402 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:46:05.870405 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:46:05.870409 | orchestrator |
2026-03-30 00:46:05.870413 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-30 00:46:05.870416 | orchestrator | Monday 30 March 2026 00:45:48 +0000 (0:00:01.285) 0:01:09.486 **********
2026-03-30 00:46:05.870423 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.870427 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:46:05.870431 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:46:05.870435 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:46:05.870438 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:46:05.870442 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:46:05.870446 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:46:05.870449 | orchestrator |
2026-03-30 00:46:05.870453 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-30 00:46:05.870457 | orchestrator | Monday 30 March 2026 00:45:49 +0000 (0:00:01.318) 0:01:10.804 **********
2026-03-30 00:46:05.870461 | orchestrator | ok: [testbed-manager]
2026-03-30 00:46:05.870464 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:46:05.870468 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:46:05.870472 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:46:05.870476 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:46:05.870479 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:46:05.870483 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:46:05.870487 | orchestrator |
2026-03-30 00:46:05.870491 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-30 00:46:05.870496 | orchestrator | Monday 30 March 2026 00:45:51 +0000 (0:00:01.834) 0:01:12.638 **********
2026-03-30 00:46:05.870500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-30 00:46:05.870530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:46:05.870557 | orchestrator |
2026-03-30 00:46:05.870564 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-30 00:46:05.870571 | orchestrator | Monday 30 March 2026 00:45:52 +0000 (0:00:01.320) 0:01:13.959 **********
2026-03-30 00:46:05.870578 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.870583 | orchestrator |
2026-03-30 00:46:05.870588 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-30 00:46:05.870592 | orchestrator | Monday 30 March 2026 00:45:54 +0000 (0:00:01.713) 0:01:15.672 **********
2026-03-30 00:46:05.870597 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:46:05.870601 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:46:05.870606 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:46:05.870610 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:46:05.870615 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:46:05.870619 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:46:05.870624 | orchestrator | changed: [testbed-manager]
2026-03-30 00:46:05.870628 | orchestrator |
2026-03-30 00:46:05.870632 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:46:05.870637 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870643 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870647 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870652 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870659 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870663 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870672 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:46:05.870676 | orchestrator |
2026-03-30
00:46:05.870680 | orchestrator | 2026-03-30 00:46:05.870685 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:46:05.870689 | orchestrator | Monday 30 March 2026 00:46:05 +0000 (0:00:11.081) 0:01:26.754 ********** 2026-03-30 00:46:05.870694 | orchestrator | =============================================================================== 2026-03-30 00:46:05.870698 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.97s 2026-03-30 00:46:05.870703 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.08s 2026-03-30 00:46:05.870707 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.49s 2026-03-30 00:46:05.870715 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.67s 2026-03-30 00:46:05.870719 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.80s 2026-03-30 00:46:05.870723 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.50s 2026-03-30 00:46:05.870728 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.83s 2026-03-30 00:46:05.870732 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.71s 2026-03-30 00:46:05.870737 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.61s 2026-03-30 00:46:05.870741 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.52s 2026-03-30 00:46:05.870745 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.49s 2026-03-30 00:46:05.870749 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.32s 2026-03-30 00:46:05.870753 | orchestrator | osism.services.netdata : Add netdata user to docker group 
--------------- 1.32s 2026-03-30 00:46:05.870756 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.29s 2026-03-30 00:46:05.870760 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.23s 2026-03-30 00:46:05.870764 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.12s 2026-03-30 00:46:05.870768 | orchestrator | 2026-03-30 00:46:05 | INFO  | Task 4b220a26-94d6-484d-92bd-93a9e1d7c5c9 is in state SUCCESS 2026-03-30 00:46:05.871061 | orchestrator | 2026-03-30 00:46:05 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:46:05.871482 | orchestrator | 2026-03-30 00:46:05 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:46:08.925022 | orchestrator | 2026-03-30 00:46:08 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:46:08.927829 | orchestrator | 2026-03-30 00:46:08 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:46:08.929704 | orchestrator | 2026-03-30 00:46:08 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:46:08.932637 | orchestrator | 2026-03-30 00:46:08 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:46:11.972001 | orchestrator | 2026-03-30 00:46:11 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:46:11.973408 | orchestrator | 2026-03-30 00:46:11 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state STARTED 2026-03-30 00:46:11.976216 | orchestrator | 2026-03-30 00:46:11 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:46:11.976259 | orchestrator | 2026-03-30 00:46:11 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:46:15.027226 | orchestrator | 2026-03-30 00:46:15 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:46:15.028416 
2026-03-30 00:47:00.759068 | orchestrator | 2026-03-30 00:47:00 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:00.763034 | orchestrator | 2026-03-30 00:47:00 | INFO  | Task 6017be8a-20ba-443f-897a-9bdc62709eea is in state SUCCESS
2026-03-30 00:47:00.764404 | orchestrator |
2026-03-30 00:47:00.764450 | orchestrator |
2026-03-30 00:47:00.764458 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-30 00:47:00.764465 | orchestrator |
2026-03-30 00:47:00.764472 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-30 00:47:00.764478 | orchestrator | Monday 30 March 2026 00:44:33 +0000 (0:00:00.457) 0:00:00.457 **********
2026-03-30 00:47:00.764551 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:47:00.764561 | orchestrator |
2026-03-30 00:47:00.764567 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-30 00:47:00.764573 | orchestrator | Monday 30 March 2026 00:44:34 +0000 (0:00:01.281) 0:00:01.738 **********
2026-03-30 00:47:00.764579 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764586 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764593 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764599 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764604 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764610 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764640 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764647 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764653 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764659 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764664 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-30 00:47:00.764754 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764768 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764775 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764780 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764786 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764792 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764798 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764804 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-30 00:47:00.764810 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764816 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-30 00:47:00.764822 | orchestrator |
2026-03-30 00:47:00.764828 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-30 00:47:00.764834 | orchestrator | Monday 30 March 2026 00:44:38 +0000 (0:00:04.002) 0:00:05.740 **********
2026-03-30 00:47:00.764855 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:47:00.764862 | orchestrator |
2026-03-30 00:47:00.764868 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-30 00:47:00.764878 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:01.711) 0:00:07.451 **********
2026-03-30 00:47:00.764888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764937 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.764959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.764966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.764980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.764987 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.764993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.764999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765062 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765068 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765074 | orchestrator |
2026-03-30 00:47:00.765080 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-30 00:47:00.765087 | orchestrator | Monday 30 March 2026 00:44:47 +0000 (0:00:07.535) 0:00:14.987 **********
2026-03-30 00:47:00.765093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.765104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765110 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.765136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765149 | orchestrator | skipping: [testbed-manager]
2026-03-30 00:47:00.765156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.765162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.765178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-30 00:47:00.765187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox',
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765199 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:47:00.765205 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:47:00.765220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765242 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:47:00.765248 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:47:00.765254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765275 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:47:00.765281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765303 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:47:00.765309 | orchestrator | 2026-03-30 00:47:00.765315 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-30 00:47:00.765321 | orchestrator | Monday 30 March 2026 00:44:50 +0000 (0:00:02.467) 0:00:17.455 ********** 2026-03-30 00:47:00.765327 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765351 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:47:00.765357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765383 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:47:00.765402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765437 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:47:00.765446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.765979 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:47:00.765985 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:47:00.765991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.765998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766009 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:47:00.766073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-30 00:47:00.766083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766095 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:47:00.766100 | orchestrator | 2026-03-30 00:47:00.766106 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-30 00:47:00.766112 | orchestrator | Monday 30 March 2026 00:44:53 +0000 (0:00:03.442) 0:00:20.897 ********** 2026-03-30 00:47:00.766117 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:47:00.766123 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:47:00.766129 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:47:00.766139 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:47:00.766144 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:47:00.766156 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:47:00.766162 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:47:00.766167 | orchestrator | 2026-03-30 00:47:00.766173 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-30 00:47:00.766178 | orchestrator | Monday 30 March 2026 00:44:55 +0000 (0:00:01.702) 0:00:22.600 ********** 2026-03-30 00:47:00.766184 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:47:00.766189 | orchestrator | skipping: [testbed-node-0] 
2026-03-30 00:47:00.766194 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:47:00.766200 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:47:00.766205 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:47:00.766211 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:47:00.766216 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:47:00.766221 | orchestrator | 2026-03-30 00:47:00.766227 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-30 00:47:00.766232 | orchestrator | Monday 30 March 2026 00:44:56 +0000 (0:00:01.084) 0:00:23.684 ********** 2026-03-30 00:47:00.766238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766249 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766285 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-30 00:47:00.766302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766331 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766351 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766391 | orchestrator | 2026-03-30 00:47:00.766396 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-30 00:47:00.766402 | orchestrator | Monday 30 March 2026 00:45:03 +0000 (0:00:07.003) 0:00:30.688 ********** 2026-03-30 00:47:00.766408 | orchestrator | [WARNING]: Skipped 2026-03-30 00:47:00.766415 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-30 00:47:00.766420 | orchestrator | to this access issue: 2026-03-30 00:47:00.766426 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-30 00:47:00.766432 | orchestrator | directory 2026-03-30 00:47:00.766437 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 00:47:00.766443 | orchestrator | 2026-03-30 00:47:00.766449 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-30 00:47:00.766454 | orchestrator | Monday 30 March 2026 00:45:04 +0000 (0:00:00.937) 0:00:31.625 ********** 2026-03-30 00:47:00.766459 | orchestrator | [WARNING]: Skipped 2026-03-30 00:47:00.766465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-30 00:47:00.766473 | orchestrator | to this access issue: 2026-03-30 00:47:00.766478 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-30 00:47:00.766484 | orchestrator | directory 2026-03-30 00:47:00.766509 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 00:47:00.766515 | orchestrator | 2026-03-30 00:47:00.766520 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 
2026-03-30 00:47:00.766526 | orchestrator | Monday 30 March 2026 00:45:05 +0000 (0:00:00.920) 0:00:32.545 ********** 2026-03-30 00:47:00.766531 | orchestrator | [WARNING]: Skipped 2026-03-30 00:47:00.766537 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-30 00:47:00.766542 | orchestrator | to this access issue: 2026-03-30 00:47:00.766548 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-30 00:47:00.766554 | orchestrator | directory 2026-03-30 00:47:00.766560 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 00:47:00.766567 | orchestrator | 2026-03-30 00:47:00.766573 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-30 00:47:00.766579 | orchestrator | Monday 30 March 2026 00:45:06 +0000 (0:00:00.841) 0:00:33.387 ********** 2026-03-30 00:47:00.766586 | orchestrator | [WARNING]: Skipped 2026-03-30 00:47:00.766592 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-30 00:47:00.766598 | orchestrator | to this access issue: 2026-03-30 00:47:00.766605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-30 00:47:00.766612 | orchestrator | directory 2026-03-30 00:47:00.766618 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 00:47:00.766624 | orchestrator | 2026-03-30 00:47:00.766630 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-30 00:47:00.766637 | orchestrator | Monday 30 March 2026 00:45:07 +0000 (0:00:00.882) 0:00:34.269 ********** 2026-03-30 00:47:00.766644 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:47:00.766650 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:47:00.766656 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:47:00.766662 | orchestrator | changed: [testbed-node-3] 2026-03-30 
00:47:00.766668 | orchestrator | changed: [testbed-manager] 2026-03-30 00:47:00.766675 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:47:00.766681 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:47:00.766687 | orchestrator | 2026-03-30 00:47:00.766694 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-30 00:47:00.766706 | orchestrator | Monday 30 March 2026 00:45:11 +0000 (0:00:04.270) 0:00:38.540 ********** 2026-03-30 00:47:00.766712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766719 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766738 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766744 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766750 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-30 00:47:00.766756 | orchestrator | 2026-03-30 00:47:00.766763 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-30 00:47:00.766769 | orchestrator | Monday 30 March 2026 00:45:14 +0000 (0:00:02.530) 0:00:41.071 ********** 2026-03-30 00:47:00.766776 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:47:00.766782 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:47:00.766788 | orchestrator | changed: [testbed-node-1] 2026-03-30 
00:47:00.766795 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:47:00.766801 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:47:00.766807 | orchestrator | changed: [testbed-manager] 2026-03-30 00:47:00.766813 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:47:00.766819 | orchestrator | 2026-03-30 00:47:00.766829 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-30 00:47:00.766836 | orchestrator | Monday 30 March 2026 00:45:18 +0000 (0:00:04.377) 0:00:45.448 ********** 2026-03-30 00:47:00.766842 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766877 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766890 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766908 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766915 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766936 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766953 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766962 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-30 00:47:00.766976 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.766982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 00:47:00.766992 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.766998 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767004 | orchestrator | 2026-03-30 00:47:00.767009 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-30 00:47:00.767015 | orchestrator | Monday 30 March 2026 00:45:21 +0000 (0:00:02.774) 0:00:48.223 ********** 2026-03-30 00:47:00.767020 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767026 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767031 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767037 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767042 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767048 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767053 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-30 00:47:00.767059 | orchestrator | 2026-03-30 00:47:00.767064 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-30 00:47:00.767070 | orchestrator | Monday 30 March 2026 00:45:23 +0000 (0:00:02.726) 0:00:50.950 ********** 2026-03-30 00:47:00.767075 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767081 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767086 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767100 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767105 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767111 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-30 00:47:00.767116 | orchestrator | 2026-03-30 00:47:00.767121 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-30 00:47:00.767127 | orchestrator | Monday 30 March 2026 00:45:26 +0000 (0:00:02.521) 0:00:53.472 ********** 2026-03-30 00:47:00.767132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767146 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767152 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767179 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-30 00:47:00.767213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767224 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767230 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 
00:47:00.767258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:47:00.767276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-30 00:47:00.767281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:47:00.767287 | orchestrator |
2026-03-30 00:47:00.767292 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-30 00:47:00.767298 | orchestrator | Monday 30 March 2026 00:45:29 +0000 (0:00:03.141) 0:00:56.614 **********
2026-03-30 00:47:00.767303 | orchestrator | changed: [testbed-manager]
2026-03-30 00:47:00.767309 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:00.767314 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:00.767320 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:00.767325 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:47:00.767331 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:47:00.767336 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:47:00.767342 | orchestrator |
2026-03-30 00:47:00.767347 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-30 00:47:00.767353 | orchestrator | Monday 30 March 2026 00:45:31 +0000 (0:00:01.518) 0:00:58.132 **********
2026-03-30 00:47:00.767358 | orchestrator | changed: [testbed-manager]
2026-03-30 00:47:00.767363 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:00.767369 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:00.767378 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:00.767383 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:47:00.767389 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:47:00.767397 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:47:00.767406 | orchestrator |
2026-03-30 00:47:00.767414 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767426 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.061) 0:00:59.644 **********
2026-03-30 00:47:00.767434 | orchestrator |
2026-03-30 00:47:00.767443 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767451 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.061) 0:00:59.706 **********
2026-03-30 00:47:00.767459 | orchestrator |
2026-03-30 00:47:00.767468 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767476 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.070) 0:00:59.776 **********
2026-03-30 00:47:00.767485 | orchestrator |
2026-03-30 00:47:00.767512 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767520 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.061) 0:00:59.838 **********
2026-03-30 00:47:00.767528 | orchestrator |
2026-03-30 00:47:00.767535 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767543 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.062) 0:00:59.900 **********
2026-03-30 00:47:00.767552 | orchestrator |
2026-03-30 00:47:00.767562 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767568 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.081) 0:00:59.982 **********
2026-03-30 00:47:00.767573 | orchestrator |
2026-03-30 00:47:00.767578 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-30 00:47:00.767583 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.070) 0:01:00.052 **********
2026-03-30 00:47:00.767589 | orchestrator |
2026-03-30 00:47:00.767594 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-30 00:47:00.767604 | orchestrator | Monday 30 March 2026 00:45:33 +0000 (0:00:00.091) 0:01:00.143 **********
2026-03-30 00:47:00.767609 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:00.767615 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:00.767620 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:00.767625 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:47:00.767631 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:47:00.767636 | orchestrator | changed: [testbed-manager]
2026-03-30 00:47:00.767641 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:47:00.767647 | orchestrator |
2026-03-30 00:47:00.767652 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-30 00:47:00.767658 | orchestrator | Monday 30 March 2026 00:46:07 +0000 (0:00:34.772) 0:01:34.916 **********
2026-03-30 00:47:00.767663 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:00.767669 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:47:00.767674 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:00.767680 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:47:00.767685 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:00.767691 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:47:00.767696 | orchestrator | changed: [testbed-manager]
2026-03-30 00:47:00.767701 | orchestrator |
2026-03-30 00:47:00.767707 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-30 00:47:00.767712 | orchestrator | Monday 30 March 2026 00:46:48 +0000 (0:00:40.640) 0:02:15.557 **********
2026-03-30 00:47:00.767718 | orchestrator | ok: [testbed-manager]
2026-03-30 00:47:00.767723 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:47:00.767729 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:47:00.767734 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:47:00.767739 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:47:00.767745 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:47:00.767750 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:47:00.767761 | orchestrator |
2026-03-30 00:47:00.767767 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-30 00:47:00.767772 | orchestrator | Monday 30 March 2026 00:46:50 +0000 (0:00:01.879) 0:02:17.437 **********
2026-03-30 00:47:00.767778 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:00.767783 | orchestrator | changed: [testbed-manager]
2026-03-30 00:47:00.767789 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:00.767794 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:00.767799 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:47:00.767805 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:47:00.767810 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:47:00.767816 | orchestrator |
2026-03-30 00:47:00.767821 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:47:00.767828 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767834 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767840 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767845 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767851 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767856 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767862 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-30 00:47:00.767867 | orchestrator |
2026-03-30 00:47:00.767873 | orchestrator |
2026-03-30 00:47:00.767878 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:47:00.767883 | orchestrator | Monday 30 March 2026 00:46:59 +0000 (0:00:09.572) 0:02:27.009 **********
2026-03-30 00:47:00.767889 | orchestrator | ===============================================================================
2026-03-30 00:47:00.767894 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.64s
2026-03-30 00:47:00.767900 | orchestrator | common : Restart fluentd container ------------------------------------- 34.77s
2026-03-30 00:47:00.767905 | orchestrator | common : Restart cron container ----------------------------------------- 9.57s
2026-03-30 00:47:00.767910 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.54s
2026-03-30 00:47:00.767916 | orchestrator | common : Copying over config.json files for services -------------------- 7.00s
2026-03-30 00:47:00.767921 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.38s
2026-03-30 00:47:00.767926 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.27s
2026-03-30 00:47:00.767932 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.00s
2026-03-30 00:47:00.767937 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.44s
2026-03-30 00:47:00.767943 | orchestrator | common : Check common containers ---------------------------------------- 3.14s
2026-03-30 00:47:00.767948 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.77s
2026-03-30 00:47:00.767953 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.73s
2026-03-30 00:47:00.767959 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.53s
2026-03-30 00:47:00.767964 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.52s
2026-03-30 00:47:00.767978 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.47s
2026-03-30 00:47:00.767984 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.88s
2026-03-30 00:47:00.767989 | orchestrator | common : include_tasks -------------------------------------------------- 1.71s
2026-03-30 00:47:00.767994 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.70s
2026-03-30 00:47:00.768000 | orchestrator | common : Creating log volume -------------------------------------------- 1.52s
2026-03-30 00:47:00.768005 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.51s
2026-03-30 00:47:00.768011 | orchestrator | 2026-03-30 00:47:00 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:00.768021 | orchestrator | 2026-03-30 00:47:00 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:03.797742 | orchestrator | 2026-03-30 00:47:03 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:03.798086 | orchestrator | 2026-03-30 00:47:03 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:03.800339 | orchestrator | 2026-03-30 00:47:03 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state STARTED
2026-03-30 00:47:03.802213 | orchestrator | 2026-03-30 00:47:03 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:03.802802 | orchestrator
| 2026-03-30 00:47:03 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:03.804820 | orchestrator | 2026-03-30 00:47:03 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:03.804856 | orchestrator | 2026-03-30 00:47:03 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:06.882932 | orchestrator | 2026-03-30 00:47:06 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:06.883990 | orchestrator | 2026-03-30 00:47:06 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:06.884040 | orchestrator | 2026-03-30 00:47:06 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state STARTED
2026-03-30 00:47:06.884052 | orchestrator | 2026-03-30 00:47:06 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:06.884062 | orchestrator | 2026-03-30 00:47:06 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:06.884073 | orchestrator | 2026-03-30 00:47:06 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:06.884083 | orchestrator | 2026-03-30 00:47:06 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:10.110900 | orchestrator | 2026-03-30 00:47:10 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:10.110982 | orchestrator | 2026-03-30 00:47:10 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:10.110991 | orchestrator | 2026-03-30 00:47:10 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state STARTED
2026-03-30 00:47:10.110998 | orchestrator | 2026-03-30 00:47:10 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:10.111005 | orchestrator | 2026-03-30 00:47:10 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:10.111026 | orchestrator | 2026-03-30 00:47:10 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:10.111033 | orchestrator | 2026-03-30 00:47:10 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:13.155138 | orchestrator | 2026-03-30 00:47:13 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:13.155262 | orchestrator | 2026-03-30 00:47:13 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:13.155285 | orchestrator | 2026-03-30 00:47:13 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state STARTED
2026-03-30 00:47:13.155302 | orchestrator | 2026-03-30 00:47:13 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:13.155315 | orchestrator | 2026-03-30 00:47:13 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:13.155328 | orchestrator | 2026-03-30 00:47:13 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:13.155340 | orchestrator | 2026-03-30 00:47:13 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:16.272766 | orchestrator | 2026-03-30 00:47:16 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:16.278535 | orchestrator | 2026-03-30 00:47:16 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:16.278907 | orchestrator | 2026-03-30 00:47:16 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state STARTED
2026-03-30 00:47:16.279987 | orchestrator | 2026-03-30 00:47:16 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:16.280558 | orchestrator | 2026-03-30 00:47:16 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:16.281602 | orchestrator | 2026-03-30 00:47:16 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:16.281626 | orchestrator | 2026-03-30 00:47:16 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:19.317399 | orchestrator | 2026-03-30 00:47:19 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:19.317815 | orchestrator | 2026-03-30 00:47:19 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:19.320868 | orchestrator | 2026-03-30 00:47:19 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state STARTED
2026-03-30 00:47:19.323046 | orchestrator | 2026-03-30 00:47:19 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:19.325588 | orchestrator | 2026-03-30 00:47:19 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:19.325857 | orchestrator | 2026-03-30 00:47:19 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:19.326266 | orchestrator | 2026-03-30 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:22.448048 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:22.448153 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:22.448168 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task 9b13f45d-1e4e-4a14-ab27-f32ba258bb28 is in state SUCCESS
2026-03-30 00:47:22.448180 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:22.448191 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:47:22.448202 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:22.448213 | orchestrator | 2026-03-30 00:47:22 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:22.448224 | orchestrator | 2026-03-30 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:25.500310 | orchestrator | 2026-03-30 00:47:25 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:25.500837 | orchestrator | 2026-03-30 00:47:25 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:25.501177 | orchestrator | 2026-03-30 00:47:25 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:25.501626 | orchestrator | 2026-03-30 00:47:25 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:47:25.502546 | orchestrator | 2026-03-30 00:47:25 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:25.503166 | orchestrator | 2026-03-30 00:47:25 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:25.503182 | orchestrator | 2026-03-30 00:47:25 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:28.539627 | orchestrator | 2026-03-30 00:47:28 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:28.539700 | orchestrator | 2026-03-30 00:47:28 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:28.539994 | orchestrator | 2026-03-30 00:47:28 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state STARTED
2026-03-30 00:47:28.542316 | orchestrator | 2026-03-30 00:47:28 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:47:28.542384 | orchestrator | 2026-03-30 00:47:28 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED
2026-03-30 00:47:28.542390 | orchestrator | 2026-03-30 00:47:28 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:47:28.542395 | orchestrator | 2026-03-30 00:47:28 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:47:31.567748 | orchestrator | 2026-03-30 00:47:31 | INFO  |
Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:47:31.568191 | orchestrator | 2026-03-30 00:47:31 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:47:31.570196 | orchestrator | 2026-03-30 00:47:31 | INFO  | Task 84927bce-2889-4dd4-9395-1e9a938ee16b is in state SUCCESS
2026-03-30 00:47:31.570346 | orchestrator |
2026-03-30 00:47:31.570361 | orchestrator |
2026-03-30 00:47:31.570521 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:47:31.570536 | orchestrator |
2026-03-30 00:47:31.570558 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 00:47:31.570576 | orchestrator | Monday 30 March 2026 00:47:04 +0000 (0:00:00.510) 0:00:00.510 **********
2026-03-30 00:47:31.570587 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:47:31.570599 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:47:31.570609 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:47:31.570618 | orchestrator |
2026-03-30 00:47:31.570628 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 00:47:31.570638 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.381) 0:00:00.892 **********
2026-03-30 00:47:31.570649 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-30 00:47:31.570660 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-30 00:47:31.570668 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-30 00:47:31.570678 | orchestrator |
2026-03-30 00:47:31.570687 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-30 00:47:31.570695 | orchestrator |
2026-03-30 00:47:31.570703 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-30 00:47:31.570761 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.395) 0:00:01.288 **********
2026-03-30 00:47:31.570774 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:47:31.570979 | orchestrator |
2026-03-30 00:47:31.570995 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-30 00:47:31.571004 | orchestrator | Monday 30 March 2026 00:47:06 +0000 (0:00:00.574) 0:00:01.863 **********
2026-03-30 00:47:31.571013 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-30 00:47:31.571022 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-30 00:47:31.571031 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-30 00:47:31.571045 | orchestrator |
2026-03-30 00:47:31.571054 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-30 00:47:31.571063 | orchestrator | Monday 30 March 2026 00:47:07 +0000 (0:00:01.617) 0:00:03.481 **********
2026-03-30 00:47:31.571071 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-30 00:47:31.571080 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-30 00:47:31.571088 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-30 00:47:31.571097 | orchestrator |
2026-03-30 00:47:31.571107 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-30 00:47:31.571115 | orchestrator | Monday 30 March 2026 00:47:10 +0000 (0:00:02.419) 0:00:05.901 **********
2026-03-30 00:47:31.571124 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:31.571133 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:31.571142 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:31.571168 | orchestrator |
2026-03-30 00:47:31.571177 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-30 00:47:31.571187 | orchestrator | Monday 30 March 2026 00:47:13 +0000 (0:00:02.958) 0:00:08.859 **********
2026-03-30 00:47:31.571197 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:47:31.571206 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:47:31.571214 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:47:31.571223 | orchestrator |
2026-03-30 00:47:31.571232 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:47:31.571240 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:47:31.571266 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:47:31.571275 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:47:31.571285 | orchestrator |
2026-03-30 00:47:31.571298 | orchestrator |
2026-03-30 00:47:31.571306 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:47:31.571315 | orchestrator | Monday 30 March 2026 00:47:19 +0000 (0:00:06.525) 0:00:15.384 **********
2026-03-30 00:47:31.571323 | orchestrator | ===============================================================================
2026-03-30 00:47:31.571332 | orchestrator | memcached : Restart memcached container --------------------------------- 6.53s
2026-03-30 00:47:31.571341 | orchestrator | memcached : Check memcached container ----------------------------------- 2.96s
2026-03-30 00:47:31.571349 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.42s
2026-03-30 00:47:31.571357 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.62s
2026-03-30 00:47:31.571367 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.57s
2026-03-30 00:47:31.571377 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-03-30 00:47:31.571386 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-03-30 00:47:31.571394 | orchestrator |
2026-03-30 00:47:31.571412 | orchestrator |
2026-03-30 00:47:31.571422 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:47:31.571432 | orchestrator |
2026-03-30 00:47:31.571441 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 00:47:31.571485 | orchestrator | Monday 30 March 2026 00:47:04 +0000 (0:00:00.349) 0:00:00.349 **********
2026-03-30 00:47:31.571496 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:47:31.571504 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:47:31.571513 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:47:31.571521 | orchestrator |
2026-03-30 00:47:31.571531 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 00:47:31.571540 | orchestrator | Monday 30 March 2026 00:47:04 +0000 (0:00:00.535) 0:00:00.885 **********
2026-03-30 00:47:31.571550 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-30 00:47:31.571561 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-30 00:47:31.571570 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-30 00:47:31.571581 | orchestrator |
2026-03-30 00:47:31.571592 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-30 00:47:31.571603 | orchestrator |
2026-03-30 00:47:31.571615 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-30 00:47:31.571627 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.556) 0:00:01.441 **********
2026-03-30 00:47:31.571639 | orchestrator | included:
/ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:47:31.571649 | orchestrator |
2026-03-30 00:47:31.571658 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-30 00:47:31.571668 | orchestrator | Monday 30 March 2026 00:47:06 +0000 (0:00:00.824) 0:00:02.265 **********
2026-03-30 00:47:31.571682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571785 | orchestrator |
2026-03-30 00:47:31.571794 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-30 00:47:31.571803 | orchestrator | Monday 30 March 2026 00:47:08 +0000 (0:00:02.488) 0:00:04.754 **********
2026-03-30 00:47:31.571814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571905 | orchestrator |
2026-03-30 00:47:31.571915 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-30 00:47:31.571926 | orchestrator | Monday 30 March 2026 00:47:12 +0000 (0:00:03.715) 0:00:08.470 **********
2026-03-30 00:47:31.571936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-30 00:47:31.571948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.571958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.571974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572154 | orchestrator | 2026-03-30 00:47:31.572163 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-30 00:47:31.572171 | orchestrator | Monday 30 March 2026 00:47:15 +0000 (0:00:02.600) 0:00:11.070 ********** 2026-03-30 00:47:31.572180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-30 00:47:31.572250 | orchestrator | 2026-03-30 00:47:31.572260 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-30 00:47:31.572269 | orchestrator | Monday 30 March 2026 00:47:16 +0000 (0:00:01.789) 0:00:12.860 ********** 2026-03-30 00:47:31.572278 | orchestrator | 2026-03-30 00:47:31.572287 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-30 00:47:31.572296 | orchestrator | Monday 30 March 2026 00:47:17 +0000 (0:00:00.265) 0:00:13.125 ********** 2026-03-30 00:47:31.572305 | orchestrator | 2026-03-30 00:47:31.572314 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-30 00:47:31.572323 | orchestrator | Monday 30 March 2026 00:47:17 +0000 (0:00:00.068) 0:00:13.194 ********** 2026-03-30 00:47:31.572333 | orchestrator | 2026-03-30 00:47:31.572343 | 
orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-30 00:47:31.572353 | orchestrator | Monday 30 March 2026 00:47:17 +0000 (0:00:00.139) 0:00:13.334 ********** 2026-03-30 00:47:31.572363 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:47:31.572373 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:47:31.572382 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:47:31.572392 | orchestrator | 2026-03-30 00:47:31.572402 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-30 00:47:31.572412 | orchestrator | Monday 30 March 2026 00:47:25 +0000 (0:00:08.547) 0:00:21.882 ********** 2026-03-30 00:47:31.572422 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:47:31.572432 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:47:31.572442 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:47:31.572451 | orchestrator | 2026-03-30 00:47:31.572522 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:47:31.572532 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:47:31.572543 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:47:31.572552 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:47:31.572561 | orchestrator | 2026-03-30 00:47:31.572569 | orchestrator | 2026-03-30 00:47:31.572579 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:47:31.572605 | orchestrator | Monday 30 March 2026 00:47:29 +0000 (0:00:03.630) 0:00:25.512 ********** 2026-03-30 00:47:31.572615 | orchestrator | =============================================================================== 2026-03-30 00:47:31.572624 | orchestrator | redis : Restart 
redis container ----------------------------------------- 8.55s 2026-03-30 00:47:31.572632 | orchestrator | redis : Copying over default config.json files -------------------------- 3.72s 2026-03-30 00:47:31.572641 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.63s 2026-03-30 00:47:31.572650 | orchestrator | redis : Copying over redis config files --------------------------------- 2.60s 2026-03-30 00:47:31.572659 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.49s 2026-03-30 00:47:31.572668 | orchestrator | redis : Check redis containers ------------------------------------------ 1.79s 2026-03-30 00:47:31.572677 | orchestrator | redis : include_tasks --------------------------------------------------- 0.83s 2026-03-30 00:47:31.572687 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-03-30 00:47:31.572695 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2026-03-30 00:47:31.572704 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.47s 2026-03-30 00:47:31.572713 | orchestrator | 2026-03-30 00:47:31 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:31.573607 | orchestrator | 2026-03-30 00:47:31 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:31.575202 | orchestrator | 2026-03-30 00:47:31 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:31.575244 | orchestrator | 2026-03-30 00:47:31 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:34.602338 | orchestrator | 2026-03-30 00:47:34 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:34.602801 | orchestrator | 2026-03-30 00:47:34 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:34.605077 
| orchestrator | 2026-03-30 00:47:34 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:34.605829 | orchestrator | 2026-03-30 00:47:34 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:34.607788 | orchestrator | 2026-03-30 00:47:34 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:34.607832 | orchestrator | 2026-03-30 00:47:34 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:37.644029 | orchestrator | 2026-03-30 00:47:37 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:37.644107 | orchestrator | 2026-03-30 00:47:37 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:37.645161 | orchestrator | 2026-03-30 00:47:37 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:37.645971 | orchestrator | 2026-03-30 00:47:37 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:37.647420 | orchestrator | 2026-03-30 00:47:37 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:37.647475 | orchestrator | 2026-03-30 00:47:37 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:40.826687 | orchestrator | 2026-03-30 00:47:40 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:40.826775 | orchestrator | 2026-03-30 00:47:40 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:40.826786 | orchestrator | 2026-03-30 00:47:40 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:40.826794 | orchestrator | 2026-03-30 00:47:40 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:40.826822 | orchestrator | 2026-03-30 00:47:40 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:40.826827 | 
orchestrator | 2026-03-30 00:47:40 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:43.828656 | orchestrator | 2026-03-30 00:47:43 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:43.830168 | orchestrator | 2026-03-30 00:47:43 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:43.832527 | orchestrator | 2026-03-30 00:47:43 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:43.833277 | orchestrator | 2026-03-30 00:47:43 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:43.834340 | orchestrator | 2026-03-30 00:47:43 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:43.834491 | orchestrator | 2026-03-30 00:47:43 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:46.858753 | orchestrator | 2026-03-30 00:47:46 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:46.861013 | orchestrator | 2026-03-30 00:47:46 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:46.862888 | orchestrator | 2026-03-30 00:47:46 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:46.864842 | orchestrator | 2026-03-30 00:47:46 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:46.866311 | orchestrator | 2026-03-30 00:47:46 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:46.866642 | orchestrator | 2026-03-30 00:47:46 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:49.891361 | orchestrator | 2026-03-30 00:47:49 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:49.891864 | orchestrator | 2026-03-30 00:47:49 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:49.892602 | orchestrator | 2026-03-30 
00:47:49 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:49.893394 | orchestrator | 2026-03-30 00:47:49 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:49.894129 | orchestrator | 2026-03-30 00:47:49 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:49.894206 | orchestrator | 2026-03-30 00:47:49 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:52.927224 | orchestrator | 2026-03-30 00:47:52 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:52.931502 | orchestrator | 2026-03-30 00:47:52 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:52.932079 | orchestrator | 2026-03-30 00:47:52 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:52.932734 | orchestrator | 2026-03-30 00:47:52 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:52.933506 | orchestrator | 2026-03-30 00:47:52 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:52.933536 | orchestrator | 2026-03-30 00:47:52 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:55.967548 | orchestrator | 2026-03-30 00:47:55 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:55.968530 | orchestrator | 2026-03-30 00:47:55 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:55.969580 | orchestrator | 2026-03-30 00:47:55 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:55.970664 | orchestrator | 2026-03-30 00:47:55 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:55.971782 | orchestrator | 2026-03-30 00:47:55 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:55.971880 | orchestrator | 2026-03-30 
00:47:55 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:47:59.100667 | orchestrator | 2026-03-30 00:47:59 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:47:59.100739 | orchestrator | 2026-03-30 00:47:59 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:47:59.103186 | orchestrator | 2026-03-30 00:47:59 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:47:59.104357 | orchestrator | 2026-03-30 00:47:59 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:47:59.106501 | orchestrator | 2026-03-30 00:47:59 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:47:59.106547 | orchestrator | 2026-03-30 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:02.156516 | orchestrator | 2026-03-30 00:48:02 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:02.156600 | orchestrator | 2026-03-30 00:48:02 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:02.158255 | orchestrator | 2026-03-30 00:48:02 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:02.160276 | orchestrator | 2026-03-30 00:48:02 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:48:02.160825 | orchestrator | 2026-03-30 00:48:02 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:48:02.160912 | orchestrator | 2026-03-30 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:05.199194 | orchestrator | 2026-03-30 00:48:05 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:05.202179 | orchestrator | 2026-03-30 00:48:05 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:05.205003 | orchestrator | 2026-03-30 00:48:05 | INFO  | Task 
7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:05.207634 | orchestrator | 2026-03-30 00:48:05 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:48:05.209124 | orchestrator | 2026-03-30 00:48:05 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:48:05.209257 | orchestrator | 2026-03-30 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:08.256700 | orchestrator | 2026-03-30 00:48:08 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:08.258770 | orchestrator | 2026-03-30 00:48:08 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:08.259629 | orchestrator | 2026-03-30 00:48:08 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:08.260660 | orchestrator | 2026-03-30 00:48:08 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state STARTED 2026-03-30 00:48:08.262326 | orchestrator | 2026-03-30 00:48:08 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED 2026-03-30 00:48:08.262423 | orchestrator | 2026-03-30 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:11.318055 | orchestrator | 2026-03-30 00:48:11 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:11.319361 | orchestrator | 2026-03-30 00:48:11 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:11.321311 | orchestrator | 2026-03-30 00:48:11 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:11.324059 | orchestrator | 2026-03-30 00:48:11 | INFO  | Task 23a6f5e8-b99e-4aff-91b0-afe1dc1fce0a is in state SUCCESS 2026-03-30 00:48:11.324902 | orchestrator | 2026-03-30 00:48:11.324931 | orchestrator | 2026-03-30 00:48:11.324937 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
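The orchestrator lines above show the deploy wrapper polling a set of task IDs once per second until each leaves the STARTED state (here, task 23a6f5e8… reaches SUCCESS and its buffered play output is then printed). A minimal sketch of that wait loop, assuming a hypothetical `get_state(task_id)` lookup that returns `STARTED`, `SUCCESS`, or `FAILURE` (the real OSISM code and its API may differ):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until all reach SUCCESS; return the number of
    polling rounds. Raises on FAILURE or timeout. `get_state` is a
    hypothetical callable standing in for the real task-state lookup."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    rounds = 0
    while pending:
        rounds += 1
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
            elif state == "FAILURE":
                raise RuntimeError(f"task {task_id} failed")
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return rounds
```

With a fake `get_state` that lets one task finish a round later than the other, the loop prints the same STARTED/Wait/SUCCESS cadence seen in the log before moving on.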
2026-03-30 00:48:11.324942 | orchestrator | 2026-03-30 00:48:11.324947 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:48:11.324951 | orchestrator | Monday 30 March 2026 00:47:04 +0000 (0:00:00.324) 0:00:00.324 ********** 2026-03-30 00:48:11.324956 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:48:11.324961 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:48:11.324965 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:48:11.324969 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:48:11.324974 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:48:11.324978 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:48:11.324982 | orchestrator | 2026-03-30 00:48:11.324987 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:48:11.324991 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.819) 0:00:01.144 ********** 2026-03-30 00:48:11.324996 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-30 00:48:11.325000 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-30 00:48:11.325005 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-30 00:48:11.325009 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-30 00:48:11.325013 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-30 00:48:11.325017 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-30 00:48:11.325022 | orchestrator | 2026-03-30 00:48:11.325026 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-30 00:48:11.325031 | orchestrator | 2026-03-30 00:48:11.325035 | orchestrator | TASK [openvswitch : include_tasks] 
********************************************* 2026-03-30 00:48:11.325040 | orchestrator | Monday 30 March 2026 00:47:06 +0000 (0:00:01.033) 0:00:02.178 ********** 2026-03-30 00:48:11.325045 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:48:11.325050 | orchestrator | 2026-03-30 00:48:11.325054 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-30 00:48:11.325059 | orchestrator | Monday 30 March 2026 00:47:07 +0000 (0:00:01.493) 0:00:03.672 ********** 2026-03-30 00:48:11.325064 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-30 00:48:11.325068 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-30 00:48:11.325073 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-30 00:48:11.325077 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-30 00:48:11.325082 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-30 00:48:11.325086 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-30 00:48:11.325090 | orchestrator | 2026-03-30 00:48:11.325095 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-30 00:48:11.325099 | orchestrator | Monday 30 March 2026 00:47:10 +0000 (0:00:02.690) 0:00:06.362 ********** 2026-03-30 00:48:11.325103 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-30 00:48:11.325107 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-30 00:48:11.325125 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-30 00:48:11.325129 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-30 00:48:11.325133 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-30 00:48:11.325137 | orchestrator | changed: 
[testbed-node-0] => (item=openvswitch) 2026-03-30 00:48:11.325141 | orchestrator | 2026-03-30 00:48:11.325144 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-30 00:48:11.325148 | orchestrator | Monday 30 March 2026 00:47:12 +0000 (0:00:02.436) 0:00:08.798 ********** 2026-03-30 00:48:11.325152 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-30 00:48:11.325156 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:48:11.325160 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-30 00:48:11.325164 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:48:11.325167 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-30 00:48:11.325171 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:48:11.325175 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-30 00:48:11.325179 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:48:11.325182 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-30 00:48:11.325186 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:48:11.325196 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-30 00:48:11.325200 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:48:11.325204 | orchestrator | 2026-03-30 00:48:11.325208 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-30 00:48:11.325211 | orchestrator | Monday 30 March 2026 00:47:14 +0000 (0:00:01.379) 0:00:10.178 ********** 2026-03-30 00:48:11.325215 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:48:11.325219 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:48:11.325223 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:48:11.325227 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:48:11.325231 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:48:11.325235 | orchestrator | 
skipping: [testbed-node-2] 2026-03-30 00:48:11.325238 | orchestrator | 2026-03-30 00:48:11.325242 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-30 00:48:11.325246 | orchestrator | Monday 30 March 2026 00:47:15 +0000 (0:00:01.151) 0:00:11.329 ********** 2026-03-30 00:48:11.325259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325331 | orchestrator | 2026-03-30 00:48:11.325335 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-30 00:48:11.325339 | orchestrator | Monday 30 March 2026 00:47:17 +0000 (0:00:01.790) 0:00:13.120 ********** 2026-03-30 00:48:11.325343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325392 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-30 00:48:11.325423 | orchestrator |
2026-03-30 00:48:11.325427 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-30 00:48:11.325431 | orchestrator | Monday 30 March 2026 00:47:21 +0000 (0:00:03.788) 0:00:16.909 **********
2026-03-30 00:48:11.325438 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:11.325443 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:11.325447 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:11.325450 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:11.325454 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:11.325457 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:11.325461 | orchestrator |
2026-03-30 00:48:11.325465 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-30 00:48:11.325469 | orchestrator | Monday 30 March 2026 00:47:22 +0000 (0:00:01.349) 0:00:18.258 **********
2026-03-30 00:48:11.325473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-30 00:48:11.325518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-30 00:48:11.325532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-30 00:48:11.325535 | orchestrator |
2026-03-30 00:48:11.325539 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-30 00:48:11.325543 | orchestrator | Monday 30 March 2026 00:47:25 +0000 (0:00:03.114) 0:00:21.373 **********
2026-03-30 00:48:11.325547 | orchestrator |
2026-03-30 00:48:11.325551 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-30 00:48:11.325554 | orchestrator | Monday 30 March 2026 00:47:25 +0000 (0:00:00.125) 0:00:21.498 **********
2026-03-30 00:48:11.325558 | orchestrator |
2026-03-30 00:48:11.325562 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-30 00:48:11.325565 | orchestrator | Monday 30 March 2026 00:47:25 +0000 (0:00:00.149) 0:00:21.648 **********
2026-03-30 00:48:11.325569 | orchestrator |
2026-03-30 00:48:11.325573 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-30 00:48:11.325576 | orchestrator | Monday 30 March 2026 00:47:25 +0000 (0:00:00.158) 0:00:21.806 **********
2026-03-30 00:48:11.325580 | orchestrator |
2026-03-30 00:48:11.325584 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-30 00:48:11.325588 | orchestrator | Monday 30 March 2026 00:47:26 +0000 (0:00:00.200) 0:00:22.006 **********
2026-03-30 00:48:11.325591 | orchestrator |
2026-03-30 00:48:11.325595 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-30 00:48:11.325599 | orchestrator | Monday 30 March 2026 00:47:26 +0000 (0:00:00.220) 0:00:22.226 **********
2026-03-30 00:48:11.325602 | orchestrator |
2026-03-30 00:48:11.325606 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-30 00:48:11.325610 | orchestrator | Monday 30 March 2026 00:47:26 +0000 (0:00:00.233) 0:00:22.459 **********
2026-03-30 00:48:11.325614 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:11.325617 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:11.325621 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:11.325625 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:11.325629 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:11.325632 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:11.325636 | orchestrator |
2026-03-30 00:48:11.325640 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-30 00:48:11.325644 | orchestrator | Monday 30 March 2026 00:47:36 +0000 (0:00:10.131) 0:00:32.591 **********
2026-03-30 00:48:11.325648 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:48:11.325651 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:48:11.325655 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:48:11.325659 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:11.325663 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:11.325669 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:11.325675 | orchestrator |
2026-03-30 00:48:11.325685 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-30 00:48:11.325692 | orchestrator | Monday 30 March 2026 00:47:38 +0000 (0:00:01.460) 0:00:34.051 **********
2026-03-30 00:48:11.325706 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:11.325719 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:11.325725 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:11.325731 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:11.325736 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:11.325745 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:11.325751 | orchestrator |
2026-03-30 00:48:11.325757 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-30 00:48:11.325763 | orchestrator | Monday 30 March 2026 00:47:48 +0000 (0:00:10.109) 0:00:44.161 **********
2026-03-30 00:48:11.325769 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-30 00:48:11.325775 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-30 00:48:11.325782 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-30 00:48:11.325788 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-30 00:48:11.325794 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-30 00:48:11.325804 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-30 00:48:11.325811 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-30 00:48:11.325817 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-30 00:48:11.325823 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-30 00:48:11.325829 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-30 00:48:11.325835 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
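For reference, each loop item in the "Set system-id, hostname and hw-offload" task corresponds to a plain `ovs-vsctl` set/remove operation on the `Open_vSwitch` table (the role drives this through an Ansible module, not via the CLI). A minimal sketch, with a hypothetical helper name, of how an item could map to the equivalent command:

```python
def ovs_vsctl_args(item):
    """Map one loop item from the task above to an ovs-vsctl argument list.

    Items carry a column ('external_ids' or 'other_config'), a key name,
    a value, and an optional state. This helper is illustrative only.
    """
    col, name, value = item["col"], item["name"], item["value"]
    if item.get("state") == "absent":
        # state: absent removes the key from the column, e.g. hw-offload
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".", col, name]
    if isinstance(value, bool):
        # booleans are written lowercase in the OVSDB
        value = str(value).lower()
    return ["ovs-vsctl", "set", "Open_vSwitch", ".", f"{col}:{name}={value}"]
```

The `hw-offload` items carry `state: 'absent'`, so they remove the key rather than set it; that is consistent with those items reporting `ok:` (no change) while the `system-id` and `hostname` sets report `changed:`.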
2026-03-30 00:48:11.325841 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-30 00:48:11.325846 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-30 00:48:11.325852 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-30 00:48:11.325859 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-30 00:48:11.325865 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-30 00:48:11.325871 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-30 00:48:11.325877 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-30 00:48:11.325883 | orchestrator |
2026-03-30 00:48:11.325890 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-30 00:48:11.325896 | orchestrator | Monday 30 March 2026 00:47:55 +0000 (0:00:06.802) 0:00:50.963 **********
2026-03-30 00:48:11.325902 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-30 00:48:11.325908 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:11.325913 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-30 00:48:11.325920 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:11.325934 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-30 00:48:11.325941 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:11.325948 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-30 00:48:11.325953 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-30 00:48:11.325960 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-30 00:48:11.325965 | orchestrator |
2026-03-30 00:48:11.325971 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-30 00:48:11.325977 | orchestrator | Monday 30 March 2026 00:47:57 +0000 (0:00:02.545) 0:00:53.509 **********
2026-03-30 00:48:11.325983 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-30 00:48:11.325989 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:11.325995 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-30 00:48:11.326001 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-30 00:48:11.326006 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:11.326043 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:11.326057 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-30 00:48:11.326064 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-30 00:48:11.326069 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-30 00:48:11.326076 | orchestrator |
2026-03-30 00:48:11.326082 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-30 00:48:11.326088 | orchestrator | Monday 30 March 2026 00:48:02 +0000 (0:00:04.549) 0:00:58.058 **********
2026-03-30 00:48:11.326094 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:11.326101 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:11.326107 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:11.326113 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:11.326119 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:11.326125 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:11.326130 | orchestrator |
2026-03-30 00:48:11.326135 | orchestrator 
| PLAY RECAP *********************************************************************
2026-03-30 00:48:11.326147 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-30 00:48:11.326154 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-30 00:48:11.326160 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-30 00:48:11.326166 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-30 00:48:11.326172 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-30 00:48:11.326185 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-30 00:48:11.326192 | orchestrator |
2026-03-30 00:48:11.326198 | orchestrator |
2026-03-30 00:48:11.326204 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:48:11.326211 | orchestrator | Monday 30 March 2026 00:48:10 +0000 (0:00:08.438) 0:01:06.496 **********
2026-03-30 00:48:11.326217 | orchestrator | ===============================================================================
2026-03-30 00:48:11.326223 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.55s
2026-03-30 00:48:11.326230 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.13s
2026-03-30 00:48:11.326236 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.80s
2026-03-30 00:48:11.326249 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.55s
2026-03-30 00:48:11.326256 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.79s
2026-03-30 00:48:11.326262 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.11s
2026-03-30 00:48:11.326268 | orchestrator | module-load : Load modules ---------------------------------------------- 2.69s
2026-03-30 00:48:11.326275 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.55s
2026-03-30 00:48:11.326281 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.44s
2026-03-30 00:48:11.326288 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.79s
2026-03-30 00:48:11.326292 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.49s
2026-03-30 00:48:11.326296 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.46s
2026-03-30 00:48:11.326300 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.38s
2026-03-30 00:48:11.326304 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.35s
2026-03-30 00:48:11.326307 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.15s
2026-03-30 00:48:11.326311 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.09s
2026-03-30 00:48:11.326315 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.03s
2026-03-30 00:48:11.326318 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s
2026-03-30 00:48:11.326624 | orchestrator | 2026-03-30 00:48:11 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:11.326636 | orchestrator | 2026-03-30 00:48:11 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:14.449227 | orchestrator | 2026-03-30 00:48:14 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:14.449696 | orchestrator |
2026-03-30 00:48:14 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:14.450972 | orchestrator | 2026-03-30 00:48:14 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:14.451775 | orchestrator | 2026-03-30 00:48:14 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:14.452928 | orchestrator | 2026-03-30 00:48:14 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:14.452994 | orchestrator | 2026-03-30 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:17.485625 | orchestrator | 2026-03-30 00:48:17 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:17.485756 | orchestrator | 2026-03-30 00:48:17 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:17.490703 | orchestrator | 2026-03-30 00:48:17 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:17.491653 | orchestrator | 2026-03-30 00:48:17 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:17.492560 | orchestrator | 2026-03-30 00:48:17 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:17.493035 | orchestrator | 2026-03-30 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:20.530883 | orchestrator | 2026-03-30 00:48:20 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:20.532724 | orchestrator | 2026-03-30 00:48:20 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:20.533456 | orchestrator | 2026-03-30 00:48:20 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:20.536117 | orchestrator | 2026-03-30 00:48:20 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:20.536563 | orchestrator | 2026-03-30 00:48:20 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:20.536595 | orchestrator | 2026-03-30 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:23.579409 | orchestrator | 2026-03-30 00:48:23 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:23.579507 | orchestrator | 2026-03-30 00:48:23 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:23.580316 | orchestrator | 2026-03-30 00:48:23 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:23.581038 | orchestrator | 2026-03-30 00:48:23 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:23.582676 | orchestrator | 2026-03-30 00:48:23 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:23.582711 | orchestrator | 2026-03-30 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:26.647759 | orchestrator | 2026-03-30 00:48:26 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:26.647848 | orchestrator | 2026-03-30 00:48:26 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:26.647856 | orchestrator | 2026-03-30 00:48:26 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:26.647860 | orchestrator | 2026-03-30 00:48:26 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:26.647865 | orchestrator | 2026-03-30 00:48:26 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:26.647872 | orchestrator | 2026-03-30 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:29.718158 | orchestrator | 2026-03-30 00:48:29 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:29.718329 | orchestrator | 2026-03-30 00:48:29 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:29.719821 | orchestrator | 2026-03-30 00:48:29 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:29.720418 | orchestrator | 2026-03-30 00:48:29 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:29.723431 | orchestrator | 2026-03-30 00:48:29 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:29.723522 | orchestrator | 2026-03-30 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:32.756687 | orchestrator | 2026-03-30 00:48:32 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:32.756786 | orchestrator | 2026-03-30 00:48:32 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:32.758085 | orchestrator | 2026-03-30 00:48:32 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:32.758123 | orchestrator | 2026-03-30 00:48:32 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:32.758287 | orchestrator | 2026-03-30 00:48:32 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:32.758299 | orchestrator | 2026-03-30 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:35.849481 | orchestrator | 2026-03-30 00:48:35 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:35.849547 | orchestrator | 2026-03-30 00:48:35 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:35.850495 | orchestrator | 2026-03-30 00:48:35 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:35.853270 | orchestrator | 2026-03-30 00:48:35 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:35.853312 | orchestrator | 2026-03-30 00:48:35 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:35.853317 | orchestrator | 2026-03-30 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:38.879752 | orchestrator | 2026-03-30 00:48:38 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED
2026-03-30 00:48:38.881104 | orchestrator | 2026-03-30 00:48:38 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:48:38.887581 | orchestrator | 2026-03-30 00:48:38 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED
2026-03-30 00:48:38.888841 | orchestrator | 2026-03-30 00:48:38 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED
2026-03-30 00:48:38.889757 | orchestrator | 2026-03-30 00:48:38 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state STARTED
2026-03-30 00:48:38.889802 | orchestrator | 2026-03-30 00:48:38 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:48:41.991749 | orchestrator |
2026-03-30 00:48:41.991805 | orchestrator |
2026-03-30 00:48:41.991813 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-30 00:48:41.991819 | orchestrator |
2026-03-30 00:48:41.991825 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-30 00:48:41.991830 | orchestrator | Monday 30 March 2026 00:44:33 +0000 (0:00:00.296) 0:00:00.296 **********
2026-03-30 00:48:41.991836 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:48:41.991842 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:48:41.991848 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:48:41.991853 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.991858 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.991863 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.991869 | orchestrator |
2026-03-30 00:48:41.991874 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server]
**************************
2026-03-30 00:48:41.991879 | orchestrator | Monday 30 March 2026 00:44:34 +0000 (0:00:00.643) 0:00:00.940 **********
2026-03-30 00:48:41.991885 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.991890 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.991896 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.991901 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.991906 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.991911 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.991916 | orchestrator |
2026-03-30 00:48:41.991922 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-30 00:48:41.991927 | orchestrator | Monday 30 March 2026 00:44:35 +0000 (0:00:00.727) 0:00:01.668 **********
2026-03-30 00:48:41.991932 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.991937 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.991943 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.991948 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.991953 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.991958 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.991963 | orchestrator |
2026-03-30 00:48:41.991968 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-30 00:48:41.991974 | orchestrator | Monday 30 March 2026 00:44:35 +0000 (0:00:00.549) 0:00:02.218 **********
2026-03-30 00:48:41.991979 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.991984 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.991989 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.991994 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.992009 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.992014 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.992019 | orchestrator |
2026-03-30 00:48:41.992025 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-30 00:48:41.992030 | orchestrator | Monday 30 March 2026 00:44:37 +0000 (0:00:02.358) 0:00:04.576 **********
2026-03-30 00:48:41.992035 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.992040 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.992045 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.992050 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.992055 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.992061 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.992066 | orchestrator |
2026-03-30 00:48:41.992072 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-30 00:48:41.992081 | orchestrator | Monday 30 March 2026 00:44:38 +0000 (0:00:00.932) 0:00:05.508 **********
2026-03-30 00:48:41.992105 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.992113 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.992118 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.992123 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.992129 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.992134 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.992139 | orchestrator |
2026-03-30 00:48:41.992144 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-30 00:48:41.992150 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:01.264) 0:00:06.773 **********
2026-03-30 00:48:41.992155 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992160 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992165 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992170 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992175 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992181 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992186 | orchestrator |
2026-03-30 00:48:41.992191 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-30 00:48:41.992196 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:00.837) 0:00:07.610 **********
2026-03-30 00:48:41.992202 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992207 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992212 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992217 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992222 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992232 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992237 | orchestrator |
2026-03-30 00:48:41.992242 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-30 00:48:41.992248 | orchestrator | Monday 30 March 2026 00:44:41 +0000 (0:00:00.564) 0:00:08.175 **********
2026-03-30 00:48:41.992253 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-30 00:48:41.992258 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-30 00:48:41.992263 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992268 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-30 00:48:41.992274 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-30 00:48:41.992279 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992284 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-30 00:48:41.992289 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-30 00:48:41.992294 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-30 00:48:41.992300 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-30 00:48:41.992314 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992339 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-30 00:48:41.992346 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-30 00:48:41.992352 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992358 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992364 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-30 00:48:41.992370 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-30 00:48:41.992376 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992382 | orchestrator |
2026-03-30 00:48:41.992388 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-30 00:48:41.992394 | orchestrator | Monday 30 March 2026 00:44:42 +0000 (0:00:00.953) 0:00:09.128 **********
2026-03-30 00:48:41.992400 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992406 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992412 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992418 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992424 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992429 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992434 | orchestrator |
2026-03-30 00:48:41.992439 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-30 00:48:41.992445 | orchestrator | Monday 30 March 2026 00:44:43 +0000 (0:00:01.397) 0:00:10.526 **********
2026-03-30 00:48:41.992450 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:48:41.992455 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:48:41.992460 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:48:41.992466 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.992471 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.992476 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.992481 | orchestrator |
2026-03-30 00:48:41.992486 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-30 00:48:41.992491 | orchestrator | Monday 30 March 2026 00:44:44 +0000 (0:00:00.763) 0:00:11.290 **********
2026-03-30 00:48:41.992496 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.992501 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.992507 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.992512 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.992517 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.992522 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.992527 | orchestrator |
2026-03-30 00:48:41.992532 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-30 00:48:41.992537 | orchestrator | Monday 30 March 2026 00:44:51 +0000 (0:00:07.102) 0:00:18.392 **********
2026-03-30 00:48:41.992542 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992547 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992552 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992558 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992563 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992568 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992573 | orchestrator |
2026-03-30 00:48:41.992578 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-30 00:48:41.992583 | orchestrator | Monday 30 March 2026 00:44:53 +0000 (0:00:01.364) 0:00:19.757 **********
2026-03-30 00:48:41.992588 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992593 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992598 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992603 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992608 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992614 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992619 | orchestrator |
2026-03-30 00:48:41.992624 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-30 00:48:41.992633 | orchestrator | Monday 30 March 2026 00:44:55 +0000 (0:00:02.699) 0:00:22.456 **********
2026-03-30 00:48:41.992638 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992643 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992648 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992653 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992659 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992664 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992669 | orchestrator |
2026-03-30 00:48:41.992674 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-30 00:48:41.992679 | orchestrator | Monday 30 March 2026 00:44:57 +0000 (0:00:01.290) 0:00:23.747 **********
2026-03-30 00:48:41.992684 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-30 00:48:41.992690 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-30 00:48:41.992697 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992703 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-30 00:48:41.992709 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-30 00:48:41.992718 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992726 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-30 00:48:41.992734 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-30 00:48:41.992741 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992749 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-30 00:48:41.992757 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-30 00:48:41.992765 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992773 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-30 00:48:41.992782 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-30 00:48:41.992790 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992799 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-30 00:48:41.992808 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-30 00:48:41.992814 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992820 | orchestrator |
2026-03-30 00:48:41.992830 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-30 00:48:41.992839 | orchestrator | Monday 30 March 2026 00:44:58 +0000 (0:00:01.184) 0:00:24.931 **********
2026-03-30 00:48:41.992845 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992850 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992856 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992861 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992866 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992871 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992876 | orchestrator |
2026-03-30 00:48:41.992881 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-30 00:48:41.992887 | orchestrator | Monday 30 March 2026 00:44:59 +0000 (0:00:01.232) 0:00:26.164 **********
2026-03-30 00:48:41.992892 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.992897 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.992902 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.992907 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.992912 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.992917 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.992922 | orchestrator |
2026-03-30 00:48:41.992927 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-30 00:48:41.992933 | orchestrator |
2026-03-30 00:48:41.992938 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-30 00:48:41.992943 | orchestrator | Monday 30 March 2026 00:45:01 +0000 (0:00:01.951) 0:00:28.116 **********
2026-03-30 00:48:41.992948 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.992953 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.992963 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.992968 | orchestrator |
2026-03-30 00:48:41.992974 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-30 00:48:41.992979 | orchestrator | Monday 30 March 2026 00:45:02 +0000 (0:00:00.833) 0:00:28.950 **********
2026-03-30 00:48:41.992984 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.992990 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.992998 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993005 | orchestrator |
2026-03-30 00:48:41.993010 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-30 00:48:41.993015 | orchestrator | Monday 30 March 2026 00:45:03 +0000 (0:00:01.121) 0:00:30.071 **********
2026-03-30 00:48:41.993024 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.993032 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993040 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.993049 | orchestrator |
2026-03-30 00:48:41.993058 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-30 00:48:41.993067 | orchestrator | Monday 30 March 2026 00:45:04 +0000 (0:00:00.953) 0:00:31.010 **********
2026-03-30 00:48:41.993077 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.993082 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.993087 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993092 | orchestrator |
2026-03-30 00:48:41.993097 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-30 00:48:41.993103 | orchestrator | Monday 30 March 2026 00:45:05 +0000 (0:00:00.374) 0:00:31.963 **********
2026-03-30 00:48:41.993108 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.993115 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993124 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.993129 | orchestrator |
2026-03-30 00:48:41.993134 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-30 00:48:41.993139 | orchestrator | Monday 30 March 2026 00:45:05 +0000 (0:00:00.374) 0:00:32.338 **********
2026-03-30 00:48:41.993144 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.993149 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.993157 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.993165 | orchestrator |
2026-03-30 00:48:41.993170 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-30 00:48:41.993175 | orchestrator | Monday 30 March 2026 00:45:06 +0000 (0:00:00.906) 0:00:33.244 **********
2026-03-30 00:48:41.993183 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.993191 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.993200 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.993209 | orchestrator |
2026-03-30 00:48:41.993217 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-30 00:48:41.993227 | orchestrator | Monday 30 March 2026 00:45:08 +0000 (0:00:01.439) 0:00:34.684 **********
2026-03-30 00:48:41.993236 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:48:41.993244 | orchestrator |
2026-03-30 00:48:41.993254 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-30 00:48:41.993263 | orchestrator | Monday 30 March 2026 00:45:08 +0000 (0:00:00.584) 0:00:35.269 **********
2026-03-30 00:48:41.993272 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.993281 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.993293 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993302 | orchestrator |
2026-03-30 00:48:41.993311 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-30 00:48:41.993320 | orchestrator | Monday 30 March 2026 00:45:11 +0000 (0:00:02.465) 0:00:37.734 **********
2026-03-30 00:48:41.993342 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993352 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.993361 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.993370 | orchestrator |
2026-03-30 00:48:41.993379 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-30 00:48:41.993395 | orchestrator | Monday 30 March 2026 00:45:11 +0000 (0:00:00.552) 0:00:38.287 **********
2026-03-30 00:48:41.993404 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993413 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.993422 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.993430 | orchestrator |
2026-03-30 00:48:41.993439 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-30 00:48:41.993449 | orchestrator | Monday 30 March 2026 00:45:12 +0000 (0:00:01.018) 0:00:39.305 **********
2026-03-30 00:48:41.993457 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993466 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.993475 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.993484 | orchestrator |
2026-03-30 00:48:41.993493 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-30 00:48:41.993508 | orchestrator | Monday 30 March 2026 00:45:14 +0000 (0:00:01.622) 0:00:40.928 **********
2026-03-30 00:48:41.993517 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.993527 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993536 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.993545 | orchestrator |
2026-03-30 00:48:41.993554 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-30 00:48:41.993563 | orchestrator | Monday 30 March 2026 00:45:14 +0000 (0:00:00.604) 0:00:41.533 **********
2026-03-30 00:48:41.993573 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.993583 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993591 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.993600 | orchestrator |
2026-03-30 00:48:41.993609 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-30 00:48:41.993617 | orchestrator | Monday 30 March 2026 00:45:15 +0000 (0:00:00.546) 0:00:42.079 **********
2026-03-30 00:48:41.993626 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.993635 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.993643 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.993652 | orchestrator |
2026-03-30 00:48:41.993661 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-30 00:48:41.993670 | orchestrator | Monday 30 March 2026 00:45:18 +0000 (0:00:02.958) 0:00:45.037 **********
2026-03-30 00:48:41.993678 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993687 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.993696 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.993705 | orchestrator |
2026-03-30 00:48:41.993714 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-30 00:48:41.993722 | orchestrator | Monday 30 March 2026 00:45:21 +0000 (0:00:03.053) 0:00:48.091 **********
2026-03-30 00:48:41.993732 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.993741 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993751 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.993759 | orchestrator |
2026-03-30 00:48:41.993769 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-30 00:48:41.993778 | orchestrator | Monday 30 March 2026 00:45:22 +0000 (0:00:00.580) 0:00:48.671 **********
2026-03-30 00:48:41.993788 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-30 00:48:41.993798 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-30 00:48:41.993807 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-30 00:48:41.993818 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-30 00:48:41.993828 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-30 00:48:41.993848 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-30 00:48:41.993858 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-30 00:48:41.993868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-30 00:48:41.993877 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-30 00:48:41.993887 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-30 00:48:41.993895 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-30 00:48:41.993904 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-30 00:48:41.993918 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.993927 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.993937 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.993946 | orchestrator |
2026-03-30 00:48:41.993956 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-30 00:48:41.993965 | orchestrator | Monday 30 March 2026 00:46:05 +0000 (0:00:43.533) 0:01:32.205 **********
2026-03-30 00:48:41.993975 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.993985 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.993994 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.994003 | orchestrator |
2026-03-30 00:48:41.994079 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-30 00:48:41.994094 | orchestrator | Monday 30 March 2026 00:46:06 +0000 (0:00:00.488) 0:01:32.693 **********
2026-03-30 00:48:41.994103 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994129 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994140 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994149 | orchestrator |
2026-03-30 00:48:41.994157 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-30 00:48:41.994167 | orchestrator | Monday 30 March 2026 00:46:07 +0000 (0:00:01.290) 0:01:33.983 **********
2026-03-30 00:48:41.994176 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994186 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994195 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994205 | orchestrator |
2026-03-30 00:48:41.994225 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-30 00:48:41.994235 | orchestrator | Monday 30 March 2026 00:46:08 +0000 (0:00:01.188) 0:01:35.171 **********
2026-03-30 00:48:41.994245 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994255 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994264 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994274 | orchestrator |
2026-03-30 00:48:41.994283 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-30 00:48:41.994292 | orchestrator | Monday 30 March 2026 00:46:32 +0000 (0:00:23.685) 0:01:58.857 **********
2026-03-30 00:48:41.994302 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.994312 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.994321 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.994379 | orchestrator |
2026-03-30 00:48:41.994388 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-30 00:48:41.994398 | orchestrator | Monday 30 March 2026 00:46:32 +0000 (0:00:00.708) 0:01:59.566 **********
2026-03-30 00:48:41.994407 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.994417 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.994427 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.994446 | orchestrator |
2026-03-30 00:48:41.994455 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-30 00:48:41.994464 | orchestrator | Monday 30 March 2026 00:46:33 +0000 (0:00:00.921) 0:02:00.488 **********
2026-03-30 00:48:41.994474 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994483 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994493 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994502 | orchestrator |
2026-03-30 00:48:41.994511 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-30 00:48:41.994521 | orchestrator | Monday 30 March 2026 00:46:34 +0000 (0:00:00.664) 0:02:01.153 **********
2026-03-30 00:48:41.994530 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.994539 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.994548 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.994557 | orchestrator |
2026-03-30 00:48:41.994567 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-30 00:48:41.994576 | orchestrator | Monday 30 March 2026 00:46:35 +0000 (0:00:00.610) 0:02:01.763 **********
2026-03-30 00:48:41.994584 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.994592 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.994599 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.994607 | orchestrator |
2026-03-30 00:48:41.994614 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-30 00:48:41.994621 | orchestrator | Monday 30 March 2026 00:46:35 +0000 (0:00:00.293) 0:02:02.057 **********
2026-03-30 00:48:41.994629 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994638 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994647 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994655 | orchestrator |
2026-03-30 00:48:41.994663 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-30 00:48:41.994672 | orchestrator | Monday 30 March 2026 00:46:36 +0000 (0:00:00.723) 0:02:02.781 **********
2026-03-30 00:48:41.994681 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994688 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994693 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994698 | orchestrator |
2026-03-30 00:48:41.994703 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-30 00:48:41.994709 | orchestrator | Monday 30 March 2026 00:46:36 +0000 (0:00:00.746) 0:02:03.527 **********
2026-03-30 00:48:41.994714 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994719 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994724 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994729 | orchestrator |
2026-03-30 00:48:41.994734 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-30 00:48:41.994739 | orchestrator | Monday 30 March 2026 00:46:37 +0000 (0:00:00.817) 0:02:04.345 **********
2026-03-30 00:48:41.994744 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:48:41.994749 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:48:41.994754 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:48:41.994759 | orchestrator |
2026-03-30 00:48:41.994764 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-30 00:48:41.994769 | orchestrator | Monday 30 March 2026 00:46:38 +0000 (0:00:00.769) 0:02:05.114 **********
2026-03-30 00:48:41.994774 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.994779 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.994784 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.994789 | orchestrator |
2026-03-30 00:48:41.994794 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-30 00:48:41.994799 | orchestrator | Monday 30 March 2026 00:46:38 +0000 (0:00:00.493) 0:02:05.608 **********
2026-03-30 00:48:41.994805 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.994815 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.994820 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.994825 | orchestrator |
2026-03-30 00:48:41.994830 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-30 00:48:41.994839 | orchestrator | Monday 30 March 2026 00:46:39 +0000 (0:00:00.318) 0:02:05.926 **********
2026-03-30 00:48:41.994844 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.994849 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.994854 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.994859 | orchestrator |
2026-03-30 00:48:41.994863 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-30 00:48:41.994868 | orchestrator | Monday 30 March 2026 00:46:39 +0000 (0:00:00.644) 0:02:06.570 **********
2026-03-30 00:48:41.994873 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.994878 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.994882 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.994887 | orchestrator |
2026-03-30 00:48:41.994892 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-30 00:48:41.994897 | orchestrator | Monday 30 March 2026 00:46:40 +0000 (0:00:00.621) 0:02:07.192 **********
2026-03-30 00:48:41.994902 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-30 00:48:41.994914 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-30 00:48:41.994920 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-30 00:48:41.994924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-30 00:48:41.994929 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-30 00:48:41.994934 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-30 00:48:41.994939 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-30 00:48:41.994944 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-30 00:48:41.994949 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-30 00:48:41.994954 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-30 00:48:41.994959 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-30 00:48:41.994963 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-30 00:48:41.994968 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-30 00:48:41.994973 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-30 00:48:41.994978 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-30 00:48:41.994983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-30 00:48:41.994987 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-30 00:48:41.994992 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-30 00:48:41.994997 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-30 00:48:41.995002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-30 00:48:41.995006 | orchestrator |
2026-03-30 00:48:41.995011 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-30 00:48:41.995016 | orchestrator |
2026-03-30 00:48:41.995021 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-30 00:48:41.995026 | orchestrator | Monday 30 March 2026 00:46:43 +0000 (0:00:03.251) 0:02:10.443 **********
2026-03-30 00:48:41.995030 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:48:41.995039 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:48:41.995045 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:48:41.995053 | orchestrator |
2026-03-30 00:48:41.995062 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-30 00:48:41.995074 | orchestrator | Monday 30 March 2026 00:46:44 +0000 (0:00:00.324) 0:02:10.767 **********
2026-03-30 00:48:41.995082 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:48:41.995090 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:48:41.995098 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:48:41.995106 | orchestrator |
2026-03-30 00:48:41.995114 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-30 00:48:41.995122 | orchestrator | Monday 30 March 2026 00:46:44 +0000 (0:00:00.502) 0:02:11.270 **********
2026-03-30 00:48:41.995130 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:48:41.995138 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:48:41.995147 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:48:41.995155 | orchestrator |
2026-03-30 00:48:41.995163 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-30 00:48:41.995171 | orchestrator | Monday 30 March 2026 00:46:45 +0000 (0:00:00.390) 0:02:11.660 **********
2026-03-30 00:48:41.995177 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:48:41.995182 | orchestrator |
2026-03-30 00:48:41.995187 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-30 00:48:41.995192 | orchestrator | Monday 30 March 2026 00:46:45 +0000 (0:00:00.427) 0:02:12.088 **********
2026-03-30 00:48:41.995203 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.995208 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.995212 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.995217 | orchestrator |
2026-03-30 00:48:41.995222 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-30 00:48:41.995227 | orchestrator | Monday 30 March 2026 00:46:45 +0000 (0:00:00.273) 0:02:12.361 **********
2026-03-30 00:48:41.995232 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.995236 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.995241 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.995246 | orchestrator |
2026-03-30 00:48:41.995251 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-30 00:48:41.995255 | orchestrator | Monday 30 March 2026 00:46:46 +0000 (0:00:00.378) 0:02:12.739 **********
2026-03-30 00:48:41.995260 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:48:41.995265 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:48:41.995270 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:48:41.995275 | orchestrator |
2026-03-30 00:48:41.995279 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-30 00:48:41.995284 | orchestrator | Monday 30 March 2026 00:46:46 +0000 (0:00:00.270) 0:02:13.010 **********
2026-03-30 00:48:41.995289 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.995294 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.995298 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.995303 | orchestrator |
2026-03-30 00:48:41.995313 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-30 00:48:41.995317 | orchestrator | Monday 30 March 2026 00:46:46 +0000 (0:00:00.583) 0:02:13.593 **********
2026-03-30 00:48:41.995337 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.995346 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.995355 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.995363 | orchestrator |
2026-03-30 00:48:41.995371 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-30 00:48:41.995377 | orchestrator | Monday 30 March 2026 00:46:48 +0000 (0:00:01.069) 0:02:14.662 **********
2026-03-30 00:48:41.995382 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.995387 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.995391 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.995401 | orchestrator |
2026-03-30 00:48:41.995406 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-30 00:48:41.995415 | orchestrator | Monday 30 March 2026 00:46:49 +0000 (0:00:01.417) 0:02:16.080 **********
2026-03-30 00:48:41.995426 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:48:41.995436 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:48:41.995444 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:48:41.995452 | orchestrator |
2026-03-30 00:48:41.995459 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-30 00:48:41.995466 | orchestrator |
2026-03-30 00:48:41.995474 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-30 00:48:41.995482 | orchestrator | Monday 30 March 2026 00:46:59 +0000 (0:00:09.585) 0:02:25.665 **********
2026-03-30 00:48:41.995491 | orchestrator | ok: [testbed-manager]
2026-03-30 00:48:41.995499 | orchestrator |
2026-03-30 00:48:41.995507 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-30 00:48:41.995514 | orchestrator | Monday 30 March 2026 00:46:59 +0000 (0:00:00.807) 0:02:26.473 **********
2026-03-30 00:48:41.995522 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995530 | orchestrator |
2026-03-30 00:48:41.995538 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-30 00:48:41.995546 | orchestrator | Monday 30 March 2026 00:47:00 +0000 (0:00:00.424) 0:02:26.898 **********
2026-03-30 00:48:41.995554 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-30 00:48:41.995562 | orchestrator |
2026-03-30 00:48:41.995570 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-30 00:48:41.995578 | orchestrator | Monday 30 March 2026 00:47:00 +0000 (0:00:00.544) 0:02:27.443 **********
2026-03-30 00:48:41.995586 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995594 | orchestrator |
2026-03-30 00:48:41.995602 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-30 00:48:41.995610 | orchestrator | Monday 30 March 2026 00:47:01 +0000 (0:00:00.907) 0:02:28.350 **********
2026-03-30 00:48:41.995618 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995626 | orchestrator |
2026-03-30 00:48:41.995634 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-30 00:48:41.995642 | orchestrator | Monday 30 March 2026 00:47:02 +0000 (0:00:00.541) 0:02:28.892 **********
2026-03-30 00:48:41.995650 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-30 00:48:41.995659 | orchestrator |
2026-03-30 00:48:41.995666 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-30 00:48:41.995674 | orchestrator | Monday 30 March 2026 00:47:04 +0000 (0:00:01.841) 0:02:30.734 **********
2026-03-30 00:48:41.995682 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-30 00:48:41.995690 | orchestrator |
2026-03-30 00:48:41.995699 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-30 00:48:41.995707 | orchestrator | Monday 30 March 2026 00:47:04 +0000 (0:00:00.801) 0:02:31.535 **********
2026-03-30 00:48:41.995716 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995723 | orchestrator |
2026-03-30 00:48:41.995732 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-30 00:48:41.995740 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.438) 0:02:31.974 **********
2026-03-30 00:48:41.995748 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995756 | orchestrator |
2026-03-30 00:48:41.995764 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-30 00:48:41.995771 | orchestrator |
2026-03-30 00:48:41.995776 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-30 00:48:41.995781 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.430) 0:02:32.404 **********
2026-03-30 00:48:41.995785 | orchestrator | ok: [testbed-manager]
2026-03-30 00:48:41.995790 | orchestrator |
2026-03-30 00:48:41.995795 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-30 00:48:41.995808 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.155) 0:02:32.559 **********
2026-03-30 00:48:41.995813 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-30 00:48:41.995818 | orchestrator |
2026-03-30 00:48:41.995823 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-30 00:48:41.995828 | orchestrator | Monday 30 March 2026 00:47:06 +0000 (0:00:00.255) 0:02:32.815 **********
2026-03-30 00:48:41.995833 | orchestrator | ok: [testbed-manager]
2026-03-30 00:48:41.995838 | orchestrator |
2026-03-30 00:48:41.995842 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-30 00:48:41.995847 | orchestrator | Monday 30 March 2026 00:47:07 +0000 (0:00:01.055) 0:02:33.870 **********
2026-03-30 00:48:41.995852 | orchestrator | ok: [testbed-manager]
2026-03-30 00:48:41.995857 | orchestrator |
2026-03-30 00:48:41.995862 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-30 00:48:41.995866 | orchestrator | Monday 30 March 2026 00:47:08 +0000 (0:00:01.222) 0:02:35.093 **********
2026-03-30 00:48:41.995871 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995876 | orchestrator |
2026-03-30 00:48:41.995881 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-30 00:48:41.995886 | orchestrator | Monday 30 March 2026 00:47:09 +0000 (0:00:00.816) 0:02:35.909 **********
2026-03-30 00:48:41.995891 | orchestrator | ok: [testbed-manager]
2026-03-30 00:48:41.995895 | orchestrator |
2026-03-30 00:48:41.995905 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-30 00:48:41.995910 | orchestrator | Monday 30 March 2026 00:47:09 +0000 (0:00:00.373) 0:02:36.283 **********
2026-03-30 00:48:41.995915 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995920 | orchestrator |
2026-03-30 00:48:41.995925 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-30 00:48:41.995930 | orchestrator | Monday 30 March 2026 00:47:16 +0000 (0:00:07.143) 0:02:43.426 **********
2026-03-30 00:48:41.995935 | orchestrator | changed: [testbed-manager]
2026-03-30 00:48:41.995939 | orchestrator |
2026-03-30 00:48:41.995944 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-30 00:48:41.995949 | orchestrator | Monday 30 March 2026 00:47:29 +0000 (0:00:12.269) 0:02:55.696 **********
2026-03-30 00:48:41.995954 | orchestrator | ok: [testbed-manager]
2026-03-30 00:48:41.995958 | orchestrator |
2026-03-30 00:48:41.995963 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-30 00:48:41.995968 | orchestrator |
2026-03-30 00:48:41.995973 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-30 00:48:41.995978 | orchestrator | Monday 30 March 2026 00:47:29 +0000 (0:00:00.579) 0:02:56.276 **********
2026-03-30 00:48:41.995982 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:48:41.995987 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:48:41.995992 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:48:41.995997 | orchestrator |
2026-03-30 00:48:41.996001 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-30 00:48:41.996006 | orchestrator | Monday 30 March 2026 00:47:30 +0000 (0:00:00.396) 0:02:56.672 **********
2026-03-30 00:48:41.996011 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996016 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:48:41.996020 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:48:41.996025 | orchestrator |
2026-03-30 00:48:41.996030 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-30 00:48:41.996035 | orchestrator | Monday 30 March 2026 00:47:30 +0000 (0:00:00.275) 0:02:56.948 **********
2026-03-30 00:48:41.996039 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:48:41.996044 | orchestrator |
2026-03-30 00:48:41.996049 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-30 00:48:41.996054 | orchestrator | Monday 30 March 2026 00:47:30 +0000 (0:00:00.490) 0:02:57.438 **********
2026-03-30 00:48:41.996062 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996067 | orchestrator |
2026-03-30 00:48:41.996072 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-30 00:48:41.996076 | orchestrator | Monday 30 March 2026 00:47:31 +0000 (0:00:00.793) 0:02:58.232 **********
2026-03-30 00:48:41.996081 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996086 | orchestrator |
2026-03-30 00:48:41.996091 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-30 00:48:41.996096 | orchestrator | Monday 30 March 2026 00:47:32 +0000 (0:00:00.942) 0:02:59.175 **********
2026-03-30 00:48:41.996101 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996105 | orchestrator |
2026-03-30 00:48:41.996110 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-30 00:48:41.996115 | orchestrator | Monday 30 March 2026 00:47:32 +0000 (0:00:00.443) 0:02:59.618 **********
2026-03-30 00:48:41.996120 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996124 | orchestrator |
2026-03-30 00:48:41.996129 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-30 00:48:41.996134 | orchestrator | Monday 30 March 2026 00:47:33 +0000 (0:00:00.853) 0:03:00.472 **********
2026-03-30 00:48:41.996139 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996143 | orchestrator |
2026-03-30 00:48:41.996148 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-30 00:48:41.996153 | orchestrator | Monday 30 March 2026 00:47:33 +0000 (0:00:00.111) 0:03:00.584 **********
2026-03-30 00:48:41.996158 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996166 | orchestrator |
2026-03-30 00:48:41.996178 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-30 00:48:41.996187 | orchestrator | Monday 30 March 2026 00:47:34 +0000 (0:00:00.089) 0:03:00.673 **********
2026-03-30 00:48:41.996195 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996203 | orchestrator |
2026-03-30 00:48:41.996210 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-30 00:48:41.996218 | orchestrator | Monday 30 March 2026 00:47:34 +0000 (0:00:00.096) 0:03:00.770 **********
2026-03-30 00:48:41.996225 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996233 | orchestrator |
2026-03-30 00:48:41.996245 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-30 00:48:41.996254 | orchestrator | Monday 30 March 2026 00:47:34 +0000 (0:00:00.116) 0:03:00.887 **********
2026-03-30 00:48:41.996262 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996271 | orchestrator |
2026-03-30 00:48:41.996279 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-30 00:48:41.996284 | orchestrator | Monday 30 March 2026 00:47:38 +0000 (0:00:04.723) 0:03:05.610 **********
2026-03-30 00:48:41.996289 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-30 00:48:41.996294 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-30 00:48:41.996298 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-30 00:48:41.996303 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-30 00:48:41.996308 | orchestrator |
2026-03-30 00:48:41.996313 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-30 00:48:41.996317 | orchestrator | Monday 30 March 2026 00:48:12 +0000 (0:00:33.494) 0:03:39.105 **********
2026-03-30 00:48:41.996344 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996350 | orchestrator |
2026-03-30 00:48:41.996359 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-30 00:48:41.996364 | orchestrator | Monday 30 March 2026 00:48:13 +0000 (0:00:01.447) 0:03:40.552 **********
2026-03-30 00:48:41.996369 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996374 | orchestrator |
2026-03-30 00:48:41.996378 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-30 00:48:41.996390 | orchestrator | Monday 30 March 2026 00:48:15 +0000 (0:00:01.683) 0:03:42.236 **********
2026-03-30 00:48:41.996398 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-30 00:48:41.996406 | orchestrator |
2026-03-30 00:48:41.996415 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-30 00:48:41.996423 | orchestrator | Monday 30 March 2026 00:48:16 +0000 (0:00:01.184) 0:03:43.421 **********
2026-03-30 00:48:41.996431 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996440 | orchestrator |
2026-03-30 00:48:41.996448 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-30 00:48:41.996457 | orchestrator | Monday 30 March 2026 00:48:16 +0000 (0:00:00.132) 0:03:43.553 **********
2026-03-30 00:48:41.996466 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-30 00:48:41.996475 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-30 00:48:41.996483 | orchestrator |
2026-03-30 00:48:41.996491 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-30 00:48:41.996497 | orchestrator | Monday 30 March 2026 00:48:18 +0000 (0:00:01.991) 0:03:45.545 **********
2026-03-30 00:48:41.996501 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:48:41.996506 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:48:41.996511 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:48:41.996516 | orchestrator | 2026-03-30 00:48:41.996521 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-30 00:48:41.996526 | orchestrator | Monday 30 March 2026 00:48:19 +0000 (0:00:00.310) 0:03:45.855 ********** 2026-03-30 00:48:41.996530 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:48:41.996535 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:48:41.996540 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:48:41.996545 | orchestrator | 2026-03-30 00:48:41.996550 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-30 00:48:41.996555 | orchestrator | 2026-03-30 00:48:41.996559 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-30 00:48:41.996564 | orchestrator | Monday 30 March 2026 00:48:20 +0000 (0:00:00.796) 0:03:46.652 ********** 2026-03-30 00:48:41.996569 | orchestrator | ok: [testbed-manager] 2026-03-30 00:48:41.996574 | orchestrator | 2026-03-30 00:48:41.996579 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-30 00:48:41.996583 | orchestrator | Monday 30 March 2026 00:48:20 +0000 (0:00:00.114) 0:03:46.767 ********** 2026-03-30 00:48:41.996588 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-30 00:48:41.996596 | orchestrator | 2026-03-30 00:48:41.996607 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-30 00:48:41.996618 | orchestrator | Monday 30 March 2026 00:48:20 +0000 (0:00:00.273) 0:03:47.040 ********** 2026-03-30 00:48:41.996625 | orchestrator | changed: [testbed-manager] 2026-03-30 00:48:41.996633 | orchestrator | 2026-03-30 00:48:41.996641 | orchestrator | 
PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-30 00:48:41.996649 | orchestrator | 2026-03-30 00:48:41.996657 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-30 00:48:41.996663 | orchestrator | Monday 30 March 2026 00:48:25 +0000 (0:00:05.210) 0:03:52.251 ********** 2026-03-30 00:48:41.996670 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:48:41.996678 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:48:41.996685 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:48:41.996692 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:48:41.996700 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:48:41.996706 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:48:41.996713 | orchestrator | 2026-03-30 00:48:41.996721 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-30 00:48:41.996729 | orchestrator | Monday 30 March 2026 00:48:26 +0000 (0:00:00.676) 0:03:52.927 ********** 2026-03-30 00:48:41.996742 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-30 00:48:41.996750 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-30 00:48:41.996758 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-30 00:48:41.996769 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-30 00:48:41.996777 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-30 00:48:41.996784 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-30 00:48:41.996792 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-30 00:48:41.996800 | orchestrator | ok: 
[testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-30 00:48:41.996809 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-30 00:48:41.996817 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-30 00:48:41.996824 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-30 00:48:41.996832 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-30 00:48:41.996840 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-30 00:48:41.996849 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-30 00:48:41.996864 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-30 00:48:41.996870 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-30 00:48:41.996875 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-30 00:48:41.996880 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-30 00:48:41.996884 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-30 00:48:41.996889 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-30 00:48:41.996894 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-30 00:48:41.996899 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-30 00:48:41.996904 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-30 00:48:41.996909 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-30 00:48:41.996913 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-30 00:48:41.996918 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-30 00:48:41.996923 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-30 00:48:41.996929 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-30 00:48:41.996937 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-30 00:48:41.996945 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-30 00:48:41.996953 | orchestrator | 2026-03-30 00:48:41.996960 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-30 00:48:41.996967 | orchestrator | Monday 30 March 2026 00:48:38 +0000 (0:00:12.423) 0:04:05.351 ********** 2026-03-30 00:48:41.996975 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:48:41.996983 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:48:41.996991 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:48:41.996999 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:48:41.997013 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:48:41.997021 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:48:41.997028 | orchestrator | 2026-03-30 00:48:41.997033 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-30 00:48:41.997038 | orchestrator | Monday 30 March 2026 00:48:39 +0000 (0:00:00.533) 0:04:05.884 ********** 2026-03-30 00:48:41.997043 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:48:41.997048 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:48:41.997052 | orchestrator | skipping: 
[testbed-node-5] 2026-03-30 00:48:41.997057 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:48:41.997062 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:48:41.997067 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:48:41.997071 | orchestrator | 2026-03-30 00:48:41.997076 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:48:41.997081 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:48:41.997088 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-30 00:48:41.997093 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-30 00:48:41.997097 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-30 00:48:41.997102 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-30 00:48:41.997110 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-30 00:48:41.997115 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-30 00:48:41.997120 | orchestrator | 2026-03-30 00:48:41.997125 | orchestrator | 2026-03-30 00:48:41.997129 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:48:41.997134 | orchestrator | Monday 30 March 2026 00:48:39 +0000 (0:00:00.589) 0:04:06.473 ********** 2026-03-30 00:48:41.997139 | orchestrator | =============================================================================== 2026-03-30 00:48:41.997144 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.53s 2026-03-30 00:48:41.997149 | orchestrator | k3s_server_post : 
Wait for Cilium resources ---------------------------- 33.49s 2026-03-30 00:48:41.997154 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.69s 2026-03-30 00:48:41.997159 | orchestrator | Manage labels ---------------------------------------------------------- 12.42s 2026-03-30 00:48:41.997164 | orchestrator | kubectl : Install required packages ------------------------------------ 12.27s 2026-03-30 00:48:41.997172 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.59s 2026-03-30 00:48:41.997177 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.14s 2026-03-30 00:48:41.997182 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.10s 2026-03-30 00:48:41.997187 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.21s 2026-03-30 00:48:41.997191 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.72s 2026-03-30 00:48:41.997196 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.25s 2026-03-30 00:48:41.997201 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.05s 2026-03-30 00:48:41.997206 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.96s 2026-03-30 00:48:41.997214 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.70s 2026-03-30 00:48:41.997219 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.47s 2026-03-30 00:48:41.997224 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.36s 2026-03-30 00:48:41.997229 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.99s 2026-03-30 
00:48:41.997234 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.95s 2026-03-30 00:48:41.997238 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.84s 2026-03-30 00:48:41.997243 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.68s 2026-03-30 00:48:41.997248 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:41.997253 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:41.997258 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task 80f8016c-8764-4161-a456-e06a3d64e8ab is in state STARTED 2026-03-30 00:48:41.997263 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:41.997268 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task 6f8e84e4-9686-4c1e-9101-3ee8cd6793a0 is in state STARTED 2026-03-30 00:48:41.997273 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:48:41.997277 | orchestrator | 2026-03-30 00:48:41 | INFO  | Task 2318aa0c-b1f0-4094-9a1e-3df4fb913465 is in state SUCCESS 2026-03-30 00:48:41.997282 | orchestrator | 2026-03-30 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:45.055715 | orchestrator | 2026-03-30 00:48:45 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:45.055906 | orchestrator | 2026-03-30 00:48:45 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:45.058221 | orchestrator | 2026-03-30 00:48:45 | INFO  | Task 80f8016c-8764-4161-a456-e06a3d64e8ab is in state STARTED 2026-03-30 00:48:45.058608 | orchestrator | 2026-03-30 00:48:45 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 
2026-03-30 00:48:45.059252 | orchestrator | 2026-03-30 00:48:45 | INFO  | Task 6f8e84e4-9686-4c1e-9101-3ee8cd6793a0 is in state STARTED 2026-03-30 00:48:45.062168 | orchestrator | 2026-03-30 00:48:45 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:48:45.062224 | orchestrator | 2026-03-30 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:48.099497 | orchestrator | 2026-03-30 00:48:48 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:48.101433 | orchestrator | 2026-03-30 00:48:48 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:48.103034 | orchestrator | 2026-03-30 00:48:48 | INFO  | Task 80f8016c-8764-4161-a456-e06a3d64e8ab is in state STARTED 2026-03-30 00:48:48.104773 | orchestrator | 2026-03-30 00:48:48 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:48.106460 | orchestrator | 2026-03-30 00:48:48 | INFO  | Task 6f8e84e4-9686-4c1e-9101-3ee8cd6793a0 is in state SUCCESS 2026-03-30 00:48:48.107016 | orchestrator | 2026-03-30 00:48:48 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:48:48.107054 | orchestrator | 2026-03-30 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:51.142987 | orchestrator | 2026-03-30 00:48:51 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:51.143928 | orchestrator | 2026-03-30 00:48:51 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:51.143984 | orchestrator | 2026-03-30 00:48:51 | INFO  | Task 80f8016c-8764-4161-a456-e06a3d64e8ab is in state STARTED 2026-03-30 00:48:51.147062 | orchestrator | 2026-03-30 00:48:51 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:51.147279 | orchestrator | 2026-03-30 00:48:51 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 
2026-03-30 00:48:51.147302 | orchestrator | 2026-03-30 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:54.179481 | orchestrator | 2026-03-30 00:48:54 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:54.180761 | orchestrator | 2026-03-30 00:48:54 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:54.181617 | orchestrator | 2026-03-30 00:48:54 | INFO  | Task 80f8016c-8764-4161-a456-e06a3d64e8ab is in state SUCCESS 2026-03-30 00:48:54.183657 | orchestrator | 2026-03-30 00:48:54 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:54.184648 | orchestrator | 2026-03-30 00:48:54 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:48:54.185003 | orchestrator | 2026-03-30 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:48:57.220425 | orchestrator | 2026-03-30 00:48:57 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:48:57.221673 | orchestrator | 2026-03-30 00:48:57 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:48:57.222787 | orchestrator | 2026-03-30 00:48:57 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:48:57.225180 | orchestrator | 2026-03-30 00:48:57 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:48:57.225237 | orchestrator | 2026-03-30 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:00.257646 | orchestrator | 2026-03-30 00:49:00 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:00.257974 | orchestrator | 2026-03-30 00:49:00 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:00.259050 | orchestrator | 2026-03-30 00:49:00 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:00.259771 | 
orchestrator | 2026-03-30 00:49:00 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:00.259799 | orchestrator | 2026-03-30 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:03.295266 | orchestrator | 2026-03-30 00:49:03 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:03.297864 | orchestrator | 2026-03-30 00:49:03 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:03.301214 | orchestrator | 2026-03-30 00:49:03 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:03.303842 | orchestrator | 2026-03-30 00:49:03 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:03.304158 | orchestrator | 2026-03-30 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:06.333015 | orchestrator | 2026-03-30 00:49:06 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:06.335357 | orchestrator | 2026-03-30 00:49:06 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:06.337678 | orchestrator | 2026-03-30 00:49:06 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:06.339546 | orchestrator | 2026-03-30 00:49:06 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:06.339705 | orchestrator | 2026-03-30 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:09.383560 | orchestrator | 2026-03-30 00:49:09 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:09.385273 | orchestrator | 2026-03-30 00:49:09 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:09.387915 | orchestrator | 2026-03-30 00:49:09 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:09.389867 | orchestrator | 2026-03-30 
00:49:09 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:09.389919 | orchestrator | 2026-03-30 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:12.424095 | orchestrator | 2026-03-30 00:49:12 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:12.427402 | orchestrator | 2026-03-30 00:49:12 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:12.430485 | orchestrator | 2026-03-30 00:49:12 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:12.432593 | orchestrator | 2026-03-30 00:49:12 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:12.433186 | orchestrator | 2026-03-30 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:15.475019 | orchestrator | 2026-03-30 00:49:15 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:15.475831 | orchestrator | 2026-03-30 00:49:15 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:15.476747 | orchestrator | 2026-03-30 00:49:15 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:15.477659 | orchestrator | 2026-03-30 00:49:15 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:15.477750 | orchestrator | 2026-03-30 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:18.514686 | orchestrator | 2026-03-30 00:49:18 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:18.516231 | orchestrator | 2026-03-30 00:49:18 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:18.516753 | orchestrator | 2026-03-30 00:49:18 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:18.517887 | orchestrator | 2026-03-30 00:49:18 | INFO  | Task 
476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:18.517909 | orchestrator | 2026-03-30 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:21.546707 | orchestrator | 2026-03-30 00:49:21 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:21.548380 | orchestrator | 2026-03-30 00:49:21 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:21.550301 | orchestrator | 2026-03-30 00:49:21 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:21.551644 | orchestrator | 2026-03-30 00:49:21 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:21.551689 | orchestrator | 2026-03-30 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:24.601879 | orchestrator | 2026-03-30 00:49:24 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:24.605835 | orchestrator | 2026-03-30 00:49:24 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:24.608045 | orchestrator | 2026-03-30 00:49:24 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:24.609533 | orchestrator | 2026-03-30 00:49:24 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:24.610057 | orchestrator | 2026-03-30 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:27.642383 | orchestrator | 2026-03-30 00:49:27 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:27.642958 | orchestrator | 2026-03-30 00:49:27 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:27.643700 | orchestrator | 2026-03-30 00:49:27 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:27.645233 | orchestrator | 2026-03-30 00:49:27 | INFO  | Task 
476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:27.645294 | orchestrator | 2026-03-30 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:30.673319 | orchestrator | 2026-03-30 00:49:30 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:30.674133 | orchestrator | 2026-03-30 00:49:30 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:30.675763 | orchestrator | 2026-03-30 00:49:30 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:30.677304 | orchestrator | 2026-03-30 00:49:30 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:30.677389 | orchestrator | 2026-03-30 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:33.701867 | orchestrator | 2026-03-30 00:49:33 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:33.703387 | orchestrator | 2026-03-30 00:49:33 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:33.705373 | orchestrator | 2026-03-30 00:49:33 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:33.706822 | orchestrator | 2026-03-30 00:49:33 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:33.706869 | orchestrator | 2026-03-30 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:36.737820 | orchestrator | 2026-03-30 00:49:36 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:36.739315 | orchestrator | 2026-03-30 00:49:36 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:36.740909 | orchestrator | 2026-03-30 00:49:36 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state STARTED 2026-03-30 00:49:36.742692 | orchestrator | 2026-03-30 00:49:36 | INFO  | Task 
476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:36.742729 | orchestrator | 2026-03-30 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:39.773502 | orchestrator | 2026-03-30 00:49:39 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:39.774169 | orchestrator | 2026-03-30 00:49:39 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:39.775382 | orchestrator | 2026-03-30 00:49:39 | INFO  | Task 7e6d865b-cf34-4814-aacd-2f8e9acc76cf is in state SUCCESS 2026-03-30 00:49:39.777076 | orchestrator | 2026-03-30 00:49:39.777146 | orchestrator | 2026-03-30 00:49:39.777154 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-30 00:49:39.777159 | orchestrator | 2026-03-30 00:49:39.777164 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-30 00:49:39.777169 | orchestrator | Monday 30 March 2026 00:48:44 +0000 (0:00:00.375) 0:00:00.375 ********** 2026-03-30 00:49:39.777174 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-30 00:49:39.777179 | orchestrator | 2026-03-30 00:49:39.777184 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-30 00:49:39.777188 | orchestrator | Monday 30 March 2026 00:48:45 +0000 (0:00:01.081) 0:00:01.457 ********** 2026-03-30 00:49:39.777193 | orchestrator | changed: [testbed-manager] 2026-03-30 00:49:39.777198 | orchestrator | 2026-03-30 00:49:39.777203 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-30 00:49:39.777207 | orchestrator | Monday 30 March 2026 00:48:46 +0000 (0:00:01.435) 0:00:02.893 ********** 2026-03-30 00:49:39.777212 | orchestrator | changed: [testbed-manager] 2026-03-30 00:49:39.777216 | orchestrator | 2026-03-30 00:49:39.777220 | orchestrator | PLAY RECAP 
*********************************************************************
2026-03-30 00:49:39.777272 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:49:39.777282 | orchestrator |
2026-03-30 00:49:39.777286 | orchestrator |
2026-03-30 00:49:39.777291 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:49:39.777295 | orchestrator | Monday 30 March 2026 00:48:47 +0000 (0:00:00.482) 0:00:03.375 **********
2026-03-30 00:49:39.777299 | orchestrator | ===============================================================================
2026-03-30 00:49:39.777302 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.44s
2026-03-30 00:49:39.777306 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.08s
2026-03-30 00:49:39.777310 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s
2026-03-30 00:49:39.777314 | orchestrator |
2026-03-30 00:49:39.777317 | orchestrator |
2026-03-30 00:49:39.777321 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-30 00:49:39.777325 | orchestrator |
2026-03-30 00:49:39.777329 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-30 00:49:39.777332 | orchestrator | Monday 30 March 2026 00:48:43 +0000 (0:00:00.213) 0:00:00.213 **********
2026-03-30 00:49:39.777336 | orchestrator | ok: [testbed-manager]
2026-03-30 00:49:39.777341 | orchestrator |
2026-03-30 00:49:39.777345 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-30 00:49:39.777349 | orchestrator | Monday 30 March 2026 00:48:44 +0000 (0:00:01.113) 0:00:01.327 **********
2026-03-30 00:49:39.777352 | orchestrator | ok: [testbed-manager]
2026-03-30 00:49:39.777356 | orchestrator |
2026-03-30 00:49:39.777360 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-30 00:49:39.777364 | orchestrator | Monday 30 March 2026 00:48:45 +0000 (0:00:00.511) 0:00:01.838 **********
2026-03-30 00:49:39.777368 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-30 00:49:39.777371 | orchestrator |
2026-03-30 00:49:39.777375 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-30 00:49:39.777379 | orchestrator | Monday 30 March 2026 00:48:46 +0000 (0:00:01.090) 0:00:02.929 **********
2026-03-30 00:49:39.777383 | orchestrator | changed: [testbed-manager]
2026-03-30 00:49:39.777387 | orchestrator |
2026-03-30 00:49:39.777390 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-30 00:49:39.777394 | orchestrator | Monday 30 March 2026 00:48:47 +0000 (0:00:01.237) 0:00:04.166 **********
2026-03-30 00:49:39.777398 | orchestrator | changed: [testbed-manager]
2026-03-30 00:49:39.777401 | orchestrator |
2026-03-30 00:49:39.777405 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-30 00:49:39.777409 | orchestrator | Monday 30 March 2026 00:48:48 +0000 (0:00:00.491) 0:00:04.658 **********
2026-03-30 00:49:39.777418 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-30 00:49:39.777422 | orchestrator |
2026-03-30 00:49:39.777425 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-30 00:49:39.777429 | orchestrator | Monday 30 March 2026 00:48:49 +0000 (0:00:01.628) 0:00:06.287 **********
2026-03-30 00:49:39.777433 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-30 00:49:39.777436 | orchestrator |
2026-03-30 00:49:39.777440 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-30 00:49:39.777444 | orchestrator | Monday 30 March 2026 00:48:50 +0000 (0:00:00.808) 0:00:07.096 **********
2026-03-30 00:49:39.777448 | orchestrator | ok: [testbed-manager]
2026-03-30 00:49:39.777451 | orchestrator |
2026-03-30 00:49:39.777455 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-30 00:49:39.777472 | orchestrator | Monday 30 March 2026 00:48:50 +0000 (0:00:00.365) 0:00:07.461 **********
2026-03-30 00:49:39.777478 | orchestrator | ok: [testbed-manager]
2026-03-30 00:49:39.777484 | orchestrator |
2026-03-30 00:49:39.777490 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:49:39.777496 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:49:39.777502 | orchestrator |
2026-03-30 00:49:39.777508 | orchestrator |
2026-03-30 00:49:39.777513 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:49:39.777518 | orchestrator | Monday 30 March 2026 00:48:51 +0000 (0:00:00.290) 0:00:07.752 **********
2026-03-30 00:49:39.777524 | orchestrator | ===============================================================================
2026-03-30 00:49:39.777530 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.63s
2026-03-30 00:49:39.777536 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.24s
2026-03-30 00:49:39.777542 | orchestrator | Get home directory of operator user ------------------------------------- 1.11s
2026-03-30 00:49:39.777559 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.09s
2026-03-30 00:49:39.777565 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.81s
2026-03-30 00:49:39.777604 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s
2026-03-30 00:49:39.777612 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s
2026-03-30 00:49:39.777619 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s
2026-03-30 00:49:39.777623 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s
2026-03-30 00:49:39.777627 | orchestrator |
2026-03-30 00:49:39.777630 | orchestrator |
2026-03-30 00:49:39.777634 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-30 00:49:39.777638 | orchestrator |
2026-03-30 00:49:39.777642 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-30 00:49:39.777646 | orchestrator | Monday 30 March 2026 00:47:24 +0000 (0:00:00.305) 0:00:00.305 **********
2026-03-30 00:49:39.777649 | orchestrator | ok: [localhost] => {
2026-03-30 00:49:39.777654 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-30 00:49:39.777658 | orchestrator | }
2026-03-30 00:49:39.777663 | orchestrator |
2026-03-30 00:49:39.777667 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-30 00:49:39.777671 | orchestrator | Monday 30 March 2026 00:47:24 +0000 (0:00:00.056) 0:00:00.362 **********
2026-03-30 00:49:39.777676 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-30 00:49:39.777681 | orchestrator | ...ignoring
2026-03-30 00:49:39.777685 | orchestrator |
2026-03-30 00:49:39.777689 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-30 00:49:39.777698 | orchestrator | Monday 30 March 2026 00:47:27 +0000 (0:00:02.851) 0:00:03.213 **********
2026-03-30 00:49:39.777702 | orchestrator | skipping: [localhost]
2026-03-30 00:49:39.777706 | orchestrator |
2026-03-30 00:49:39.777709 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-30 00:49:39.777713 | orchestrator | Monday 30 March 2026 00:47:27 +0000 (0:00:00.176) 0:00:03.390 **********
2026-03-30 00:49:39.777717 | orchestrator | ok: [localhost]
2026-03-30 00:49:39.777721 | orchestrator |
2026-03-30 00:49:39.777726 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:49:39.777732 | orchestrator |
2026-03-30 00:49:39.777738 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 00:49:39.777745 | orchestrator | Monday 30 March 2026 00:47:28 +0000 (0:00:00.803) 0:00:04.193 **********
2026-03-30 00:49:39.777750 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:49:39.777756 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:49:39.777763 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:49:39.777769 | orchestrator |
2026-03-30 00:49:39.777776 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 00:49:39.777782 | orchestrator | Monday 30 March 2026 00:47:28 +0000 (0:00:00.510) 0:00:04.704 **********
2026-03-30 00:49:39.777788 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-30 00:49:39.777801 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-30 00:49:39.777805 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-30 00:49:39.777809 | orchestrator |
2026-03-30 00:49:39.777813 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-30 00:49:39.777817 | orchestrator |
2026-03-30 00:49:39.777820 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-30 00:49:39.777824 | orchestrator | Monday 30 March 2026 00:47:29 +0000 (0:00:00.597) 0:00:05.301 **********
2026-03-30 00:49:39.777829 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:49:39.777833 | orchestrator |
2026-03-30 00:49:39.777837 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-30 00:49:39.777840 | orchestrator | Monday 30 March 2026 00:47:30 +0000 (0:00:01.073) 0:00:06.375 **********
2026-03-30 00:49:39.777844 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:49:39.777848 | orchestrator |
2026-03-30 00:49:39.777852 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-30 00:49:39.777856 | orchestrator | Monday 30 March 2026 00:47:31 +0000 (0:00:01.319) 0:00:07.695 **********
2026-03-30 00:49:39.777859 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.777863 | orchestrator |
2026-03-30 00:49:39.777867 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-30 00:49:39.777870 | orchestrator | Monday 30 March 2026 00:47:31 +0000 (0:00:00.332) 0:00:08.027 **********
2026-03-30 00:49:39.777875 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.777881 | orchestrator |
2026-03-30 00:49:39.777892 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-30 00:49:39.777898 | orchestrator | Monday 30 March 2026 00:47:32 +0000 (0:00:00.307) 0:00:08.334 **********
2026-03-30 00:49:39.777903 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.777910 | orchestrator |
2026-03-30 00:49:39.777918 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-30 00:49:39.777925 | orchestrator | Monday 30 March 2026 00:47:32 +0000 (0:00:00.506) 0:00:08.841 **********
2026-03-30 00:49:39.777931 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.777936 | orchestrator |
2026-03-30 00:49:39.777942 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-30 00:49:39.777948 | orchestrator | Monday 30 March 2026 00:47:33 +0000 (0:00:00.391) 0:00:09.233 **********
2026-03-30 00:49:39.777953 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:49:39.777966 | orchestrator |
2026-03-30 00:49:39.777972 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-30 00:49:39.777985 | orchestrator | Monday 30 March 2026 00:47:33 +0000 (0:00:00.689) 0:00:09.923 **********
2026-03-30 00:49:39.777992 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:49:39.777998 | orchestrator |
2026-03-30 00:49:39.778004 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-30 00:49:39.778010 | orchestrator | Monday 30 March 2026 00:47:34 +0000 (0:00:00.997) 0:00:10.921 **********
2026-03-30 00:49:39.778055 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.778062 | orchestrator |
2026-03-30 00:49:39.778069 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-30 00:49:39.778075 | orchestrator | Monday 30 March 2026 00:47:35 +0000 (0:00:00.580) 0:00:11.501 **********
2026-03-30 00:49:39.778081 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.778088 | orchestrator |
2026-03-30 00:49:39.778095 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-30 00:49:39.778099 | orchestrator | Monday 30 March 2026 00:47:35 +0000 (0:00:00.387) 0:00:11.889 **********
2026-03-30 00:49:39.778107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778136 | orchestrator |
2026-03-30 00:49:39.778142 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-30 00:49:39.778147 | orchestrator | Monday 30 March 2026 00:47:37 +0000 (0:00:01.824) 0:00:13.713 **********
2026-03-30 00:49:39.778162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778183 | orchestrator |
2026-03-30 00:49:39.778189 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-30 00:49:39.778195 | orchestrator | Monday 30 March 2026 00:47:40 +0000 (0:00:02.498) 0:00:16.211 **********
2026-03-30 00:49:39.778208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-30 00:49:39.778220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-30 00:49:39.778244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-30 00:49:39.778251 | orchestrator |
2026-03-30 00:49:39.778258 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-30 00:49:39.778264 | orchestrator | Monday 30 March 2026 00:47:43 +0000 (0:00:03.016) 0:00:19.228 **********
2026-03-30 00:49:39.778270 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-30 00:49:39.778277 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-30 00:49:39.778283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-30 00:49:39.778289 | orchestrator |
2026-03-30 00:49:39.778296 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-30 00:49:39.778307 | orchestrator | Monday 30 March 2026 00:47:44 +0000 (0:00:01.739) 0:00:20.967 **********
2026-03-30 00:49:39.778313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-30 00:49:39.778320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-30 00:49:39.778326 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-30 00:49:39.778332 | orchestrator |
2026-03-30 00:49:39.778339 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-30 00:49:39.778345 | orchestrator | Monday 30 March 2026 00:47:46 +0000 (0:00:01.636) 0:00:22.604 **********
2026-03-30 00:49:39.778352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-30 00:49:39.778358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-30 00:49:39.778365 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-30 00:49:39.778371 | orchestrator |
2026-03-30 00:49:39.778379 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-30 00:49:39.778386 | orchestrator | Monday 30 March 2026 00:47:47 +0000 (0:00:01.348) 0:00:23.952 **********
2026-03-30 00:49:39.778392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-30 00:49:39.778398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-30 00:49:39.778404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-30 00:49:39.778410 | orchestrator |
2026-03-30 00:49:39.778416 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-30 00:49:39.778423 | orchestrator | Monday 30 March 2026 00:47:49 +0000 (0:00:01.413) 0:00:25.366 **********
2026-03-30 00:49:39.778429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-30 00:49:39.778435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-30 00:49:39.778442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-30 00:49:39.778448 | orchestrator |
2026-03-30 00:49:39.778454 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-30 00:49:39.778460 | orchestrator | Monday 30 March 2026 00:47:50 +0000 (0:00:01.746) 0:00:27.113 **********
2026-03-30 00:49:39.778467 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.778473 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:49:39.778479 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:49:39.778486 | orchestrator |
2026-03-30 00:49:39.778493 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-30 00:49:39.778505 | orchestrator | Monday 30 March 2026 00:47:51 +0000 (0:00:00.403) 0:00:27.516 **********
2026-03-30 00:49:39.778512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:49:39.778541 | orchestrator |
2026-03-30 00:49:39.778548 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-30 00:49:39.778554 | orchestrator | Monday 30 March 2026 00:47:52 +0000 (0:00:01.011) 0:00:28.528 **********
2026-03-30 00:49:39.778560 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:49:39.778566 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:49:39.778571 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:49:39.778577 | orchestrator |
2026-03-30 00:49:39.778582 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-30 00:49:39.778588 | orchestrator | Monday 30 March 2026 00:47:53 +0000 (0:00:00.834) 0:00:29.362 **********
2026-03-30 00:49:39.778599 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:49:39.778605 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:49:39.778610 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:49:39.778617 | orchestrator |
2026-03-30 00:49:39.778624 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-30 00:49:39.778630 | orchestrator | Monday 30 March 2026 00:48:04 +0000 (0:00:11.670) 0:00:41.033 **********
2026-03-30 00:49:39.778637 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:49:39.778643 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:49:39.778649 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:49:39.778655 | orchestrator |
2026-03-30 00:49:39.778662 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-30 00:49:39.778668 | orchestrator |
2026-03-30 00:49:39.778674 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-30 00:49:39.778680 | orchestrator | Monday 30 March 2026 00:48:05 +0000 (0:00:00.355) 0:00:41.389 **********
2026-03-30 00:49:39.778686 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:49:39.778693 | orchestrator |
2026-03-30 00:49:39.778698 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-30 00:49:39.778704 | orchestrator | Monday 30 March 2026 00:48:05 +0000 (0:00:00.559) 0:00:41.948 **********
2026-03-30 00:49:39.778710 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:49:39.778717 | orchestrator |
2026-03-30 00:49:39.778722 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-30 00:49:39.778728 | orchestrator | Monday 30 March 2026 00:48:06 +0000 (0:00:00.259) 0:00:42.208 **********
2026-03-30 00:49:39.778735 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:49:39.778741 | orchestrator |
2026-03-30 00:49:39.778748 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-30 00:49:39.778754 | orchestrator | Monday 30 March 2026 00:48:07 +0000 (0:00:01.674) 0:00:43.883 **********
2026-03-30 00:49:39.778761 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:49:39.778767 | orchestrator |
2026-03-30 00:49:39.778773 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-30 00:49:39.778780 | orchestrator |
2026-03-30 00:49:39.778787 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-30 00:49:39.778793 | orchestrator | Monday 30 March 2026 00:49:01 +0000 (0:00:54.025) 0:01:37.908 **********
2026-03-30 00:49:39.778799 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:49:39.778805 | orchestrator |
2026-03-30 00:49:39.778812 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-30 00:49:39.778823 | orchestrator | Monday 30 March 2026 00:49:02 +0000 (0:00:00.734) 0:01:38.643 **********
2026-03-30 00:49:39.778829 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:49:39.778836 | orchestrator |
2026-03-30 00:49:39.778842 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-30 00:49:39.778848 | orchestrator | Monday 30 March 2026 00:49:02 +0000 (0:00:00.233) 0:01:38.876 **********
2026-03-30 00:49:39.778855 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:49:39.778861 | orchestrator |
2026-03-30 00:49:39.778865 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-30 00:49:39.778869 | orchestrator | Monday 30 March 2026 00:49:04 +0000 (0:00:01.671) 0:01:40.548 **********
2026-03-30 00:49:39.778873 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:49:39.778876 | orchestrator |
2026-03-30 00:49:39.778880 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-30 00:49:39.778884 | orchestrator |
2026-03-30 00:49:39.778888 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-30 00:49:39.778892 | orchestrator | Monday 30 March 2026 00:49:19 +0000 (0:00:15.134) 0:01:55.683 **********
2026-03-30 00:49:39.778896 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:49:39.778900 | orchestrator |
2026-03-30 00:49:39.778908 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-30 00:49:39.778912 | orchestrator | Monday 30 March 2026 00:49:20 +0000 (0:00:00.647) 0:01:56.331 **********
2026-03-30 00:49:39.778924 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:49:39.778928 | orchestrator |
2026-03-30 00:49:39.778932 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-30 00:49:39.778937 | orchestrator | Monday 30 March 2026 00:49:20 +0000 (0:00:00.197) 0:01:56.528 **********
2026-03-30 00:49:39.778941 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:49:39.778945 | orchestrator |
2026-03-30 00:49:39.778949 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-30 00:49:39.778953 | orchestrator | Monday 30 March 2026 00:49:21 +0000 (0:00:01.632) 0:01:58.161 **********
2026-03-30 00:49:39.778956 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:49:39.778960 | orchestrator |
2026-03-30 00:49:39.778964 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-30 00:49:39.778968 | orchestrator |
2026-03-30 00:49:39.778972 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-30 00:49:39.778976 | orchestrator | Monday 30 March 2026 00:49:35 +0000 (0:00:13.549) 0:02:11.710 **********
2026-03-30 00:49:39.778980 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:49:39.778984 | orchestrator |
2026-03-30 00:49:39.778988 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-30 00:49:39.778992 | orchestrator | Monday 30 March 2026 00:49:36 +0000 (0:00:00.622) 0:02:12.333 **********
2026-03-30 00:49:39.778995 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:49:39.778999 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:49:39.779004 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:49:39.779008 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-30 00:49:39.779012 | orchestrator | enable_outward_rabbitmq_True
2026-03-30 00:49:39.779016 | orchestrator |
2026-03-30 00:49:39.779020 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-30 00:49:39.779024 | orchestrator | skipping: no hosts matched
2026-03-30 00:49:39.779028 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-30 00:49:39.779032 | orchestrator | outward_rabbitmq_restart
2026-03-30 00:49:39.779036 | orchestrator |
2026-03-30 00:49:39.779040 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-30 00:49:39.779044 | orchestrator | skipping: no hosts matched
2026-03-30 00:49:39.779047 | orchestrator |
2026-03-30 00:49:39.779051 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-30 00:49:39.779055 | orchestrator | skipping: no hosts matched
2026-03-30 00:49:39.779059 | orchestrator |
2026-03-30 00:49:39.779063 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:49:39.779067 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-30 00:49:39.779073 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-30 00:49:39.779077 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:49:39.779081 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 00:49:39.779084 | orchestrator |
2026-03-30 00:49:39.779088 | orchestrator |
2026-03-30 00:49:39.779092 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:49:39.779096 | orchestrator | Monday 30 March 2026 00:49:38 +0000 (0:00:02.129) 0:02:14.462 **********
2026-03-30 00:49:39.779099 | orchestrator | ===============================================================================
2026-03-30 00:49:39.779103 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.71s
2026-03-30 00:49:39.779108 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 11.67s
2026-03-30 00:49:39.779115 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.98s
2026-03-30 00:49:39.779119 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.02s
2026-03-30 00:49:39.779123 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.85s
2026-03-30 00:49:39.779127 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.50s
2026-03-30 00:49:39.779131 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.13s
2026-03-30 00:49:39.779138 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.94s
2026-03-30 00:49:39.779142 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.82s
2026-03-30 00:49:39.779146 |
orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.75s 2026-03-30 00:49:39.779150 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.74s 2026-03-30 00:49:39.779154 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.64s 2026-03-30 00:49:39.779158 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.41s 2026-03-30 00:49:39.779162 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.35s 2026-03-30 00:49:39.779166 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.32s 2026-03-30 00:49:39.779170 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.07s 2026-03-30 00:49:39.779174 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.01s 2026-03-30 00:49:39.779180 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2026-03-30 00:49:39.779185 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.83s 2026-03-30 00:49:39.779189 | orchestrator | Set kolla_action_rabbitmq = kolla_action_ng ----------------------------- 0.80s 2026-03-30 00:49:39.779193 | orchestrator | 2026-03-30 00:49:39 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:39.779197 | orchestrator | 2026-03-30 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:42.800455 | orchestrator | 2026-03-30 00:49:42 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:42.800574 | orchestrator | 2026-03-30 00:49:42 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:42.801421 | orchestrator | 2026-03-30 00:49:42 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 
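The repeated `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` entries above come from a simple poll-until-done loop in the orchestrator. A minimal sketch of that loop, with a scripted stand-in for the real task-state lookup (the actual deployment queries the OSISM task backend by UUID; `make_get_state` here is a hypothetical test helper):

```python
import time

# Hypothetical stand-in for the real task-state lookup: each call advances
# through a scripted sequence of states so the loop terminates.
def make_get_state(states):
    it = iter(states)
    last = {"state": None}
    def get_state():
        last["state"] = next(it, last["state"])
        return last["state"]
    return get_state

def wait_for_task(task_id, get_state, interval=1.0, sleep=time.sleep):
    """Poll a task until it leaves STARTED, logging like the job console."""
    while True:
        state = get_state()
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        print(f"Wait {int(interval)} second(s) until the next check")
        sleep(interval)

final = wait_for_task(
    "476dd1d3-46d5-43b1-956b-92b23419aa03",
    make_get_state(["STARTED", "STARTED", "SUCCESS"]),
    sleep=lambda s: None,  # skip real sleeping in this sketch
)
print(final)  # SUCCESS
```

The real loop tracks several task UUIDs at once (three are interleaved in the log above) and keeps polling until each reports a terminal state such as SUCCESS.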
2026-03-30 00:49:42.801492 | orchestrator | 2026-03-30 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:45.833260 | orchestrator | 2026-03-30 00:49:45 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:45.834609 | orchestrator | 2026-03-30 00:49:45 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:45.836292 | orchestrator | 2026-03-30 00:49:45 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:45.836344 | orchestrator | 2026-03-30 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:48.867483 | orchestrator | 2026-03-30 00:49:48 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:48.868009 | orchestrator | 2026-03-30 00:49:48 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:48.869062 | orchestrator | 2026-03-30 00:49:48 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:48.869081 | orchestrator | 2026-03-30 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:51.901770 | orchestrator | 2026-03-30 00:49:51 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:51.902114 | orchestrator | 2026-03-30 00:49:51 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:51.903006 | orchestrator | 2026-03-30 00:49:51 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:51.903058 | orchestrator | 2026-03-30 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:54.955917 | orchestrator | 2026-03-30 00:49:54 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:54.958394 | orchestrator | 2026-03-30 00:49:54 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:54.961945 | orchestrator | 2026-03-30 
00:49:54 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:54.962398 | orchestrator | 2026-03-30 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:49:58.000541 | orchestrator | 2026-03-30 00:49:58 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:49:58.004001 | orchestrator | 2026-03-30 00:49:58 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:49:58.005735 | orchestrator | 2026-03-30 00:49:58 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:49:58.006082 | orchestrator | 2026-03-30 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:01.043296 | orchestrator | 2026-03-30 00:50:01 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:01.045740 | orchestrator | 2026-03-30 00:50:01 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:01.050170 | orchestrator | 2026-03-30 00:50:01 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:01.050544 | orchestrator | 2026-03-30 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:04.085010 | orchestrator | 2026-03-30 00:50:04 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:04.085864 | orchestrator | 2026-03-30 00:50:04 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:04.088781 | orchestrator | 2026-03-30 00:50:04 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:04.088819 | orchestrator | 2026-03-30 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:07.131682 | orchestrator | 2026-03-30 00:50:07 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:07.133054 | orchestrator | 2026-03-30 00:50:07 | INFO  | Task 
e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:07.133096 | orchestrator | 2026-03-30 00:50:07 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:07.133102 | orchestrator | 2026-03-30 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:10.177372 | orchestrator | 2026-03-30 00:50:10 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:10.178303 | orchestrator | 2026-03-30 00:50:10 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:10.179033 | orchestrator | 2026-03-30 00:50:10 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:10.179067 | orchestrator | 2026-03-30 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:13.217483 | orchestrator | 2026-03-30 00:50:13 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:13.219060 | orchestrator | 2026-03-30 00:50:13 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:13.221677 | orchestrator | 2026-03-30 00:50:13 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:13.222265 | orchestrator | 2026-03-30 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:16.270168 | orchestrator | 2026-03-30 00:50:16 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:16.273878 | orchestrator | 2026-03-30 00:50:16 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:16.275703 | orchestrator | 2026-03-30 00:50:16 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:16.276227 | orchestrator | 2026-03-30 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:19.347678 | orchestrator | 2026-03-30 00:50:19 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state 
STARTED 2026-03-30 00:50:19.348587 | orchestrator | 2026-03-30 00:50:19 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:19.351667 | orchestrator | 2026-03-30 00:50:19 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:19.351716 | orchestrator | 2026-03-30 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:22.417416 | orchestrator | 2026-03-30 00:50:22 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:22.417495 | orchestrator | 2026-03-30 00:50:22 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:22.418328 | orchestrator | 2026-03-30 00:50:22 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:22.418350 | orchestrator | 2026-03-30 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:25.456804 | orchestrator | 2026-03-30 00:50:25 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:25.458710 | orchestrator | 2026-03-30 00:50:25 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:25.460234 | orchestrator | 2026-03-30 00:50:25 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:25.460260 | orchestrator | 2026-03-30 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:28.510948 | orchestrator | 2026-03-30 00:50:28 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:28.512987 | orchestrator | 2026-03-30 00:50:28 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:28.515217 | orchestrator | 2026-03-30 00:50:28 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:28.515847 | orchestrator | 2026-03-30 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:31.564415 | orchestrator | 
2026-03-30 00:50:31 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:31.566584 | orchestrator | 2026-03-30 00:50:31 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:31.568616 | orchestrator | 2026-03-30 00:50:31 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:31.568829 | orchestrator | 2026-03-30 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:34.612255 | orchestrator | 2026-03-30 00:50:34 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:34.615281 | orchestrator | 2026-03-30 00:50:34 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:34.615355 | orchestrator | 2026-03-30 00:50:34 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:34.615365 | orchestrator | 2026-03-30 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:37.641334 | orchestrator | 2026-03-30 00:50:37 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:37.642902 | orchestrator | 2026-03-30 00:50:37 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:37.644655 | orchestrator | 2026-03-30 00:50:37 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:37.644696 | orchestrator | 2026-03-30 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:40.679985 | orchestrator | 2026-03-30 00:50:40 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:40.680503 | orchestrator | 2026-03-30 00:50:40 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:40.681639 | orchestrator | 2026-03-30 00:50:40 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state STARTED 2026-03-30 00:50:40.681866 | orchestrator | 2026-03-30 00:50:40 | INFO  | 
Wait 1 second(s) until the next check 2026-03-30 00:50:43.730463 | orchestrator | 2026-03-30 00:50:43 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:43.731361 | orchestrator | 2026-03-30 00:50:43 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:43.735589 | orchestrator | 2026-03-30 00:50:43 | INFO  | Task 476dd1d3-46d5-43b1-956b-92b23419aa03 is in state SUCCESS 2026-03-30 00:50:43.738553 | orchestrator | 2026-03-30 00:50:43.738599 | orchestrator | 2026-03-30 00:50:43.738608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 00:50:43.738617 | orchestrator | 2026-03-30 00:50:43.738624 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:50:43.738631 | orchestrator | Monday 30 March 2026 00:48:16 +0000 (0:00:00.179) 0:00:00.179 ********** 2026-03-30 00:50:43.738639 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:50:43.738646 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:50:43.738653 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:50:43.738660 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.738667 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.738674 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.738680 | orchestrator | 2026-03-30 00:50:43.738687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:50:43.738694 | orchestrator | Monday 30 March 2026 00:48:17 +0000 (0:00:00.949) 0:00:01.129 ********** 2026-03-30 00:50:43.738701 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-30 00:50:43.738709 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-30 00:50:43.738716 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-30 00:50:43.738722 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-30 
00:50:43.738730 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-30 00:50:43.738737 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-30 00:50:43.738744 | orchestrator | 2026-03-30 00:50:43.738751 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-30 00:50:43.738757 | orchestrator | 2026-03-30 00:50:43.738764 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-30 00:50:43.738771 | orchestrator | Monday 30 March 2026 00:48:18 +0000 (0:00:01.690) 0:00:02.819 ********** 2026-03-30 00:50:43.738779 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:50:43.738787 | orchestrator | 2026-03-30 00:50:43.738807 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-30 00:50:43.738814 | orchestrator | Monday 30 March 2026 00:48:20 +0000 (0:00:01.382) 0:00:04.202 ********** 2026-03-30 00:50:43.738827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738872 | orchestrator | 2026-03-30 00:50:43.738888 | orchestrator | TASK [ovn-controller 
: Copying over config.json files for services] ************ 2026-03-30 00:50:43.738895 | orchestrator | Monday 30 March 2026 00:48:21 +0000 (0:00:01.521) 0:00:05.724 ********** 2026-03-30 00:50:43.738901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-30 00:50:43.738937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738950 | orchestrator | 2026-03-30 00:50:43.738957 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-30 00:50:43.738963 | orchestrator | Monday 30 March 2026 00:48:23 +0000 (0:00:01.447) 0:00:07.171 ********** 2026-03-30 00:50:43.738970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.738993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739024 | orchestrator | 2026-03-30 00:50:43.739031 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-30 00:50:43.739037 | orchestrator | Monday 30 March 2026 00:48:25 +0000 (0:00:01.743) 0:00:08.915 ********** 2026-03-30 00:50:43.739044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739135 | orchestrator | 2026-03-30 00:50:43.739150 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-30 00:50:43.739157 | orchestrator | Monday 30 March 2026 00:48:27 +0000 (0:00:02.419) 0:00:11.334 ********** 2026-03-30 00:50:43.739165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.739223 | orchestrator | 2026-03-30 00:50:43.739230 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-30 00:50:43.739236 | orchestrator | Monday 30 March 2026 00:48:29 +0000 (0:00:02.389) 0:00:13.724 ********** 2026-03-30 00:50:43.739244 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.739253 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:50:43.739261 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:50:43.739269 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:50:43.739276 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.739284 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.739291 | orchestrator | 2026-03-30 00:50:43.739300 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-30 00:50:43.739308 | orchestrator | Monday 30 March 2026 00:48:32 +0000 (0:00:02.865) 0:00:16.589 ********** 2026-03-30 00:50:43.739317 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-30 00:50:43.739327 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-30 00:50:43.739335 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-30 00:50:43.739342 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-30 00:50:43.739351 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-30 00:50:43.739363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-30 00:50:43.739371 
| orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-30 00:50:43.739380 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-30 00:50:43.739394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-30 00:50:43.739402 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-30 00:50:43.739410 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-30 00:50:43.739417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-30 00:50:43.739424 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-30 00:50:43.739433 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-30 00:50:43.739440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-30 00:50:43.739448 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-30 00:50:43.739455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-30 00:50:43.739463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-30 00:50:43.739471 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-30 00:50:43.739480 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-30 00:50:43.739487 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-30 00:50:43.739499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-30 00:50:43.739507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-30 00:50:43.739514 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-30 00:50:43.739521 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-30 00:50:43.739528 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-30 00:50:43.739535 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-30 00:50:43.739542 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-30 00:50:43.739549 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-30 00:50:43.739556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-30 00:50:43.739564 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-30 00:50:43.739571 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-30 00:50:43.739578 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-30 00:50:43.739585 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-30 00:50:43.739592 | orchestrator 
| changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-30 00:50:43.739603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-30 00:50:43.739610 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-30 00:50:43.739617 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-30 00:50:43.739624 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-30 00:50:43.739631 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-30 00:50:43.739638 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-30 00:50:43.739645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-30 00:50:43.739652 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-30 00:50:43.739660 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-30 00:50:43.739671 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-30 00:50:43.739678 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-30 00:50:43.739686 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 
'absent'}) 2026-03-30 00:50:43.739692 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-30 00:50:43.739699 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-30 00:50:43.739706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-30 00:50:43.739713 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-30 00:50:43.739721 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-30 00:50:43.739728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-30 00:50:43.739735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-30 00:50:43.739742 | orchestrator | 2026-03-30 00:50:43.739749 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-30 00:50:43.739756 | orchestrator | Monday 30 March 2026 00:48:52 +0000 (0:00:20.190) 0:00:36.780 ********** 2026-03-30 00:50:43.739763 | orchestrator | 2026-03-30 00:50:43.739770 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-30 00:50:43.739777 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.126) 0:00:36.907 ********** 2026-03-30 00:50:43.739783 | orchestrator | 2026-03-30 00:50:43.739792 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-30 00:50:43.739799 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.135) 0:00:37.042 ********** 
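The "Configure OVN in OVSDB" task above writes per-chassis `external-ids` into the local Open vSwitch database. The `ovn-remote` value visible in the log is simply the three ovn-sb-db hosts joined into a comma-separated `tcp:<host>:6642` list. A minimal sketch of how those key/value pairs could be assembled (the helper function and its name are illustrative assumptions, not kolla-ansible's actual implementation; the host IPs and port are the ones shown in the log):

```python
# Sketch: assemble the OVN external-ids that the task above writes per chassis.
# build_ovn_external_ids is a hypothetical helper for illustration only;
# values mirror those visible in the log output.

def build_ovn_external_ids(encap_ip, sb_db_hosts, sb_port=6642):
    """Return the external-ids key/value pairs one chassis needs."""
    return {
        "ovn-encap-ip": encap_ip,                       # this node's tunnel endpoint
        "ovn-encap-type": "geneve",                     # encapsulation seen in the log
        "ovn-remote": ",".join(                         # all SB DB endpoints, comma-joined
            f"tcp:{host}:{sb_port}" for host in sb_db_hosts
        ),
        "ovn-remote-probe-interval": "60000",           # ms, as in the log
        "ovn-openflow-probe-interval": "60",            # s, as in the log
        "ovn-monitor-all": "false",
    }

sb_hosts = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
ids = build_ovn_external_ids("192.168.16.13", sb_hosts)
# Each pair corresponds to one `ovs-vsctl set open_vswitch . external_ids:KEY=VALUE`.
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

The log also shows the task splitting the hosts into gateway and compute roles: on testbed-node-0..2 it sets `ovn-bridge-mappings` to `physnet1:br-ex` and `ovn-cms-options` to `enable-chassis-as-gw,availability-zones=nova` (state `present`), while removing those keys (state `absent`) on testbed-node-3..5, which also get per-node `ovn-chassis-mac-mappings` instead.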
2026-03-30 00:50:43.739806 | orchestrator | 2026-03-30 00:50:43.739813 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-30 00:50:43.739820 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.070) 0:00:37.113 ********** 2026-03-30 00:50:43.739826 | orchestrator | 2026-03-30 00:50:43.739837 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-30 00:50:43.739845 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.075) 0:00:37.189 ********** 2026-03-30 00:50:43.739852 | orchestrator | 2026-03-30 00:50:43.739858 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-30 00:50:43.739865 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.059) 0:00:37.248 ********** 2026-03-30 00:50:43.739872 | orchestrator | 2026-03-30 00:50:43.739879 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-30 00:50:43.739886 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.059) 0:00:37.307 ********** 2026-03-30 00:50:43.739893 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:50:43.739900 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:50:43.739907 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:50:43.739914 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.739921 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.739928 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.739935 | orchestrator | 2026-03-30 00:50:43.739942 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-30 00:50:43.739949 | orchestrator | Monday 30 March 2026 00:48:55 +0000 (0:00:01.851) 0:00:39.159 ********** 2026-03-30 00:50:43.739956 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.739964 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:50:43.739971 | 
orchestrator | changed: [testbed-node-4] 2026-03-30 00:50:43.739978 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.739985 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.739992 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:50:43.739999 | orchestrator | 2026-03-30 00:50:43.740006 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-30 00:50:43.740013 | orchestrator | 2026-03-30 00:50:43.740020 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-30 00:50:43.740027 | orchestrator | Monday 30 March 2026 00:49:24 +0000 (0:00:29.692) 0:01:08.851 ********** 2026-03-30 00:50:43.740034 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:50:43.740041 | orchestrator | 2026-03-30 00:50:43.740048 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-30 00:50:43.740055 | orchestrator | Monday 30 March 2026 00:49:25 +0000 (0:00:00.470) 0:01:09.321 ********** 2026-03-30 00:50:43.740061 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:50:43.740068 | orchestrator | 2026-03-30 00:50:43.740092 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-30 00:50:43.740099 | orchestrator | Monday 30 March 2026 00:49:26 +0000 (0:00:00.642) 0:01:09.964 ********** 2026-03-30 00:50:43.740105 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740111 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740118 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740124 | orchestrator | 2026-03-30 00:50:43.740131 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-30 00:50:43.740138 | orchestrator | Monday 30 March 
2026 00:49:26 +0000 (0:00:00.857) 0:01:10.821 ********** 2026-03-30 00:50:43.740145 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740152 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740159 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740170 | orchestrator | 2026-03-30 00:50:43.740177 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-30 00:50:43.740184 | orchestrator | Monday 30 March 2026 00:49:27 +0000 (0:00:00.327) 0:01:11.149 ********** 2026-03-30 00:50:43.740191 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740197 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740204 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740211 | orchestrator | 2026-03-30 00:50:43.740218 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-30 00:50:43.740229 | orchestrator | Monday 30 March 2026 00:49:27 +0000 (0:00:00.443) 0:01:11.592 ********** 2026-03-30 00:50:43.740235 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740242 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740249 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740255 | orchestrator | 2026-03-30 00:50:43.740262 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-30 00:50:43.740269 | orchestrator | Monday 30 March 2026 00:49:28 +0000 (0:00:00.300) 0:01:11.893 ********** 2026-03-30 00:50:43.740276 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740283 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740290 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740297 | orchestrator | 2026-03-30 00:50:43.740304 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-30 00:50:43.740311 | orchestrator | Monday 30 March 2026 00:49:28 +0000 (0:00:00.307) 0:01:12.200 ********** 2026-03-30 
00:50:43.740318 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740325 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740331 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740338 | orchestrator | 2026-03-30 00:50:43.740345 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-30 00:50:43.740352 | orchestrator | Monday 30 March 2026 00:49:28 +0000 (0:00:00.260) 0:01:12.460 ********** 2026-03-30 00:50:43.740359 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740366 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740373 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740380 | orchestrator | 2026-03-30 00:50:43.740386 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-30 00:50:43.740393 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:00.486) 0:01:12.946 ********** 2026-03-30 00:50:43.740400 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740407 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740416 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740423 | orchestrator | 2026-03-30 00:50:43.740429 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-30 00:50:43.740436 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:00.255) 0:01:13.202 ********** 2026-03-30 00:50:43.740442 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740449 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740456 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740463 | orchestrator | 2026-03-30 00:50:43.740470 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-30 00:50:43.740477 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:00.293) 0:01:13.496 ********** 2026-03-30 
00:50:43.740484 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740491 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740498 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740504 | orchestrator | 2026-03-30 00:50:43.740511 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-30 00:50:43.740518 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:00.249) 0:01:13.746 ********** 2026-03-30 00:50:43.740525 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740532 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740539 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740547 | orchestrator | 2026-03-30 00:50:43.740554 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-30 00:50:43.740561 | orchestrator | Monday 30 March 2026 00:49:30 +0000 (0:00:00.245) 0:01:13.991 ********** 2026-03-30 00:50:43.740568 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740574 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740581 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740588 | orchestrator | 2026-03-30 00:50:43.740595 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-30 00:50:43.740602 | orchestrator | Monday 30 March 2026 00:49:30 +0000 (0:00:00.465) 0:01:14.456 ********** 2026-03-30 00:50:43.740614 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740621 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740628 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740635 | orchestrator | 2026-03-30 00:50:43.740642 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-30 00:50:43.740649 | orchestrator | Monday 30 March 2026 00:49:30 +0000 (0:00:00.277) 0:01:14.734 ********** 2026-03-30 
00:50:43.740655 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740662 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740668 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740675 | orchestrator | 2026-03-30 00:50:43.740681 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-30 00:50:43.740687 | orchestrator | Monday 30 March 2026 00:49:31 +0000 (0:00:00.262) 0:01:14.997 ********** 2026-03-30 00:50:43.740694 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740701 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740707 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740713 | orchestrator | 2026-03-30 00:50:43.740719 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-30 00:50:43.740726 | orchestrator | Monday 30 March 2026 00:49:31 +0000 (0:00:00.312) 0:01:15.309 ********** 2026-03-30 00:50:43.740732 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740738 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740745 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740751 | orchestrator | 2026-03-30 00:50:43.740758 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-30 00:50:43.740764 | orchestrator | Monday 30 March 2026 00:49:31 +0000 (0:00:00.406) 0:01:15.716 ********** 2026-03-30 00:50:43.740771 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740778 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740789 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740795 | orchestrator | 2026-03-30 00:50:43.740802 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-30 00:50:43.740808 | orchestrator | Monday 30 March 2026 00:49:32 +0000 (0:00:00.285) 0:01:16.002 ********** 2026-03-30 
00:50:43.740815 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:50:43.740821 | orchestrator | 2026-03-30 00:50:43.740828 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-30 00:50:43.740834 | orchestrator | Monday 30 March 2026 00:49:32 +0000 (0:00:00.555) 0:01:16.557 ********** 2026-03-30 00:50:43.740839 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740845 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740852 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740859 | orchestrator | 2026-03-30 00:50:43.740866 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-30 00:50:43.740872 | orchestrator | Monday 30 March 2026 00:49:33 +0000 (0:00:00.742) 0:01:17.299 ********** 2026-03-30 00:50:43.740879 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.740886 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.740892 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.740899 | orchestrator | 2026-03-30 00:50:43.740905 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-30 00:50:43.740911 | orchestrator | Monday 30 March 2026 00:49:33 +0000 (0:00:00.405) 0:01:17.705 ********** 2026-03-30 00:50:43.740918 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740924 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740930 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740936 | orchestrator | 2026-03-30 00:50:43.740942 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-30 00:50:43.740949 | orchestrator | Monday 30 March 2026 00:49:34 +0000 (0:00:00.280) 0:01:17.985 ********** 2026-03-30 00:50:43.740956 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.740963 | 
orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.740975 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.740982 | orchestrator | 2026-03-30 00:50:43.740988 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-30 00:50:43.740995 | orchestrator | Monday 30 March 2026 00:49:34 +0000 (0:00:00.354) 0:01:18.340 ********** 2026-03-30 00:50:43.741001 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.741011 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.741017 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.741024 | orchestrator | 2026-03-30 00:50:43.741030 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-30 00:50:43.741037 | orchestrator | Monday 30 March 2026 00:49:34 +0000 (0:00:00.423) 0:01:18.764 ********** 2026-03-30 00:50:43.741043 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.741050 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.741056 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.741063 | orchestrator | 2026-03-30 00:50:43.741069 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-30 00:50:43.741144 | orchestrator | Monday 30 March 2026 00:49:35 +0000 (0:00:00.315) 0:01:19.079 ********** 2026-03-30 00:50:43.741152 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.741159 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.741165 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.741172 | orchestrator | 2026-03-30 00:50:43.741179 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-30 00:50:43.741185 | orchestrator | Monday 30 March 2026 00:49:35 +0000 (0:00:00.274) 0:01:19.354 ********** 2026-03-30 00:50:43.741192 | orchestrator | skipping: [testbed-node-0] 2026-03-30 
00:50:43.741199 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.741205 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.741211 | orchestrator | 2026-03-30 00:50:43.741217 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-30 00:50:43.741223 | orchestrator | Monday 30 March 2026 00:49:35 +0000 (0:00:00.263) 0:01:19.618 ********** 2026-03-30 00:50:43.741231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741317 | orchestrator | 2026-03-30 00:50:43.741324 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-30 00:50:43.741331 | orchestrator | Monday 30 March 2026 00:49:37 +0000 (0:00:01.457) 0:01:21.075 ********** 2026-03-30 00:50:43.741339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741407 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741414 | orchestrator | 2026-03-30 00:50:43.741421 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-30 00:50:43.741428 | orchestrator | Monday 30 March 2026 00:49:40 +0000 (0:00:03.629) 0:01:24.705 ********** 2026-03-30 00:50:43.741435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.741508 | orchestrator | 2026-03-30 00:50:43.741514 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-30 00:50:43.741521 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:02.581) 0:01:27.287 ********** 2026-03-30 00:50:43.741528 | orchestrator | 2026-03-30 00:50:43.741535 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-30 00:50:43.741544 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:00.061) 0:01:27.348 ********** 2026-03-30 00:50:43.741551 | orchestrator | 2026-03-30 00:50:43.741558 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-30 00:50:43.741565 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:00.061) 0:01:27.409 ********** 2026-03-30 00:50:43.741572 | orchestrator | 2026-03-30 00:50:43.741579 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-30 00:50:43.741585 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:00.063) 0:01:27.473 ********** 2026-03-30 00:50:43.741592 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.741599 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.741606 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.741613 | orchestrator | 2026-03-30 00:50:43.741620 | orchestrator | RUNNING HANDLER [ovn-db 
: Restart ovn-sb-db container] ************************* 2026-03-30 00:50:43.741626 | orchestrator | Monday 30 March 2026 00:49:51 +0000 (0:00:07.459) 0:01:34.933 ********** 2026-03-30 00:50:43.741631 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.741637 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.741644 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.741651 | orchestrator | 2026-03-30 00:50:43.741658 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-30 00:50:43.741664 | orchestrator | Monday 30 March 2026 00:49:59 +0000 (0:00:08.597) 0:01:43.531 ********** 2026-03-30 00:50:43.741672 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.741679 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.741685 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.741692 | orchestrator | 2026-03-30 00:50:43.741699 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-30 00:50:43.741706 | orchestrator | Monday 30 March 2026 00:50:02 +0000 (0:00:03.097) 0:01:46.628 ********** 2026-03-30 00:50:43.741719 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.741726 | orchestrator | 2026-03-30 00:50:43.741733 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-30 00:50:43.741740 | orchestrator | Monday 30 March 2026 00:50:02 +0000 (0:00:00.230) 0:01:46.859 ********** 2026-03-30 00:50:43.741747 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.741754 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.741761 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.741767 | orchestrator | 2026-03-30 00:50:43.741774 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-30 00:50:43.741780 | orchestrator | Monday 30 March 2026 00:50:03 +0000 (0:00:00.761) 0:01:47.620 
********** 2026-03-30 00:50:43.741786 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.741792 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.741798 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.741805 | orchestrator | 2026-03-30 00:50:43.741811 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-30 00:50:43.741818 | orchestrator | Monday 30 March 2026 00:50:04 +0000 (0:00:00.570) 0:01:48.191 ********** 2026-03-30 00:50:43.741824 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.741830 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.741836 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.741842 | orchestrator | 2026-03-30 00:50:43.741849 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-30 00:50:43.741855 | orchestrator | Monday 30 March 2026 00:50:05 +0000 (0:00:01.062) 0:01:49.253 ********** 2026-03-30 00:50:43.741861 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.741868 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.741874 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.741880 | orchestrator | 2026-03-30 00:50:43.741887 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-30 00:50:43.741892 | orchestrator | Monday 30 March 2026 00:50:06 +0000 (0:00:00.777) 0:01:50.031 ********** 2026-03-30 00:50:43.741898 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.741905 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.741917 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.741923 | orchestrator | 2026-03-30 00:50:43.741930 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-30 00:50:43.741936 | orchestrator | Monday 30 March 2026 00:50:07 +0000 (0:00:01.106) 0:01:51.138 ********** 2026-03-30 00:50:43.741943 | 
orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.741949 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.741956 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.741962 | orchestrator | 2026-03-30 00:50:43.741969 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-30 00:50:43.741975 | orchestrator | Monday 30 March 2026 00:50:08 +0000 (0:00:00.762) 0:01:51.901 ********** 2026-03-30 00:50:43.741981 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.741988 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.741994 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.742000 | orchestrator | 2026-03-30 00:50:43.742007 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-30 00:50:43.742053 | orchestrator | Monday 30 March 2026 00:50:08 +0000 (0:00:00.487) 0:01:52.389 ********** 2026-03-30 00:50:43.742061 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742070 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742118 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742125 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742132 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742138 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742152 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742159 | orchestrator | 2026-03-30 00:50:43.742166 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-30 00:50:43.742173 | orchestrator | Monday 30 March 2026 00:50:10 +0000 (0:00:01.496) 0:01:53.885 ********** 2026-03-30 00:50:43.742180 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742209 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742230 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 
00:50:43.742236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742252 | orchestrator | 2026-03-30 00:50:43.742259 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-30 00:50:43.742265 | orchestrator | Monday 30 March 2026 00:50:13 +0000 (0:00:03.880) 0:01:57.766 ********** 2026-03-30 00:50:43.742277 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742291 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742312 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742333 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 00:50:43.742345 | orchestrator | 2026-03-30 00:50:43.742405 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-30 00:50:43.742413 | orchestrator | Monday 30 March 2026 00:50:16 +0000 (0:00:02.696) 0:02:00.463 ********** 2026-03-30 00:50:43.742419 | orchestrator | 2026-03-30 00:50:43.742426 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-30 00:50:43.742432 | orchestrator | Monday 30 March 2026 00:50:16 +0000 (0:00:00.127) 0:02:00.590 ********** 2026-03-30 00:50:43.742438 | orchestrator | 2026-03-30 00:50:43.742445 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-30 00:50:43.742451 | orchestrator | Monday 30 March 2026 00:50:16 +0000 (0:00:00.264) 0:02:00.855 ********** 2026-03-30 00:50:43.742458 | orchestrator | 2026-03-30 00:50:43.742464 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db 
container] ************************* 2026-03-30 00:50:43.742471 | orchestrator | Monday 30 March 2026 00:50:17 +0000 (0:00:00.081) 0:02:00.937 ********** 2026-03-30 00:50:43.742477 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.742484 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.742491 | orchestrator | 2026-03-30 00:50:43.742502 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-30 00:50:43.742514 | orchestrator | Monday 30 March 2026 00:50:23 +0000 (0:00:06.189) 0:02:07.127 ********** 2026-03-30 00:50:43.742520 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.742527 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.742533 | orchestrator | 2026-03-30 00:50:43.742540 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-30 00:50:43.742546 | orchestrator | Monday 30 March 2026 00:50:29 +0000 (0:00:06.048) 0:02:13.175 ********** 2026-03-30 00:50:43.742552 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:50:43.742558 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:50:43.742564 | orchestrator | 2026-03-30 00:50:43.742581 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-30 00:50:43.742588 | orchestrator | Monday 30 March 2026 00:50:35 +0000 (0:00:06.362) 0:02:19.538 ********** 2026-03-30 00:50:43.742594 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:50:43.742599 | orchestrator | 2026-03-30 00:50:43.742606 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-30 00:50:43.742612 | orchestrator | Monday 30 March 2026 00:50:35 +0000 (0:00:00.109) 0:02:19.648 ********** 2026-03-30 00:50:43.742618 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.742625 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.742631 | orchestrator | ok: [testbed-node-2] 2026-03-30 
00:50:43.742638 | orchestrator | 2026-03-30 00:50:43.742644 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-30 00:50:43.742651 | orchestrator | Monday 30 March 2026 00:50:36 +0000 (0:00:00.694) 0:02:20.342 ********** 2026-03-30 00:50:43.742657 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.742663 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.742670 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.742677 | orchestrator | 2026-03-30 00:50:43.742684 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-30 00:50:43.742690 | orchestrator | Monday 30 March 2026 00:50:37 +0000 (0:00:00.695) 0:02:21.037 ********** 2026-03-30 00:50:43.742696 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.742703 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.742709 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.742716 | orchestrator | 2026-03-30 00:50:43.742723 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-30 00:50:43.742733 | orchestrator | Monday 30 March 2026 00:50:37 +0000 (0:00:00.720) 0:02:21.758 ********** 2026-03-30 00:50:43.742740 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:50:43.742747 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:50:43.742754 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:50:43.742761 | orchestrator | 2026-03-30 00:50:43.742767 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-30 00:50:43.742774 | orchestrator | Monday 30 March 2026 00:50:38 +0000 (0:00:00.520) 0:02:22.279 ********** 2026-03-30 00:50:43.742781 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.742787 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.742794 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.742801 | orchestrator | 
2026-03-30 00:50:43.742808 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-30 00:50:43.742814 | orchestrator | Monday 30 March 2026 00:50:39 +0000 (0:00:00.748) 0:02:23.027 ********** 2026-03-30 00:50:43.742821 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:50:43.742827 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:50:43.742834 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:50:43.742841 | orchestrator | 2026-03-30 00:50:43.742848 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:50:43.742856 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-30 00:50:43.742864 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-30 00:50:43.742876 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-30 00:50:43.742884 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:50:43.742891 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:50:43.742899 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:50:43.742906 | orchestrator | 2026-03-30 00:50:43.742913 | orchestrator | 2026-03-30 00:50:43.742920 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:50:43.742927 | orchestrator | Monday 30 March 2026 00:50:40 +0000 (0:00:01.460) 0:02:24.488 ********** 2026-03-30 00:50:43.742934 | orchestrator | =============================================================================== 2026-03-30 00:50:43.742942 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.69s 2026-03-30 
00:50:43.742949 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.19s 2026-03-30 00:50:43.742955 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.65s 2026-03-30 00:50:43.742962 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.65s 2026-03-30 00:50:43.742977 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.46s 2026-03-30 00:50:43.742985 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s 2026-03-30 00:50:43.742992 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.63s 2026-03-30 00:50:43.743005 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.87s 2026-03-30 00:50:43.743012 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.70s 2026-03-30 00:50:43.743019 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.58s 2026-03-30 00:50:43.743033 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.42s 2026-03-30 00:50:43.743041 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.39s 2026-03-30 00:50:43.743047 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.85s 2026-03-30 00:50:43.743054 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.74s 2026-03-30 00:50:43.743061 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.69s 2026-03-30 00:50:43.743068 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.52s 2026-03-30 00:50:43.743089 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s 2026-03-30 00:50:43.743095 
| orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.46s 2026-03-30 00:50:43.743101 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2026-03-30 00:50:43.743108 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.45s 2026-03-30 00:50:43.743115 | orchestrator | 2026-03-30 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:46.772503 | orchestrator | 2026-03-30 00:50:46 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:46.773095 | orchestrator | 2026-03-30 00:50:46 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:46.773176 | orchestrator | 2026-03-30 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:49.807896 | orchestrator | 2026-03-30 00:50:49 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:49.809605 | orchestrator | 2026-03-30 00:50:49 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:49.809670 | orchestrator | 2026-03-30 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:52.846204 | orchestrator | 2026-03-30 00:50:52 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:52.846946 | orchestrator | 2026-03-30 00:50:52 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:52.846991 | orchestrator | 2026-03-30 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:50:55.893213 | orchestrator | 2026-03-30 00:50:55 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state STARTED 2026-03-30 00:50:55.894656 | orchestrator | 2026-03-30 00:50:55 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:50:55.894830 | orchestrator | 2026-03-30 00:50:55 | INFO  | Wait 1 second(s) until the next check 
2026-03-30 00:53:03.573329 | orchestrator | 2026-03-30 00:53:03 | INFO 
| Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED
2026-03-30 00:53:03.574767 | orchestrator | 2026-03-30 00:53:03 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:53:06.614167 | orchestrator | 2026-03-30 00:53:06 | INFO  | Task f65b1065-43e6-4b96-a480-5a7ceebb021c is in state SUCCESS
2026-03-30 00:53:06.619580 | orchestrator |
2026-03-30 00:53:06.619668 | orchestrator |
2026-03-30 00:53:06.619680 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:53:06.619699 | orchestrator |
2026-03-30 00:53:06.619713 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 00:53:06.619721 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.451) 0:00:00.451 **********
2026-03-30 00:53:06.619728 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:53:06.619735 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:53:06.619742 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:53:06.619749 | orchestrator |
2026-03-30 00:53:06.619756 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 00:53:06.619762 | orchestrator | Monday 30 March 2026 00:47:05 +0000 (0:00:00.596) 0:00:01.048 **********
2026-03-30 00:53:06.619770 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-30 00:53:06.619776 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-30 00:53:06.619783 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-30 00:53:06.619805 | orchestrator |
2026-03-30 00:53:06.619812 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-30 00:53:06.619818 | orchestrator |
2026-03-30 00:53:06.619825 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-30 00:53:06.619832 | orchestrator | Monday 30 March 2026 00:47:06 +0000 (0:00:00.562) 0:00:01.611 **********
2026-03-30 00:53:06.619862 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.619870 | orchestrator |
2026-03-30 00:53:06.619876 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-30 00:53:06.619883 | orchestrator | Monday 30 March 2026 00:47:07 +0000 (0:00:01.252) 0:00:02.864 **********
2026-03-30 00:53:06.619889 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:53:06.619895 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:53:06.619915 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:53:06.619922 | orchestrator |
2026-03-30 00:53:06.619928 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-30 00:53:06.619935 | orchestrator | Monday 30 March 2026 00:47:09 +0000 (0:00:02.034) 0:00:04.898 **********
2026-03-30 00:53:06.619941 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.619948 | orchestrator |
2026-03-30 00:53:06.619954 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-30 00:53:06.619960 | orchestrator | Monday 30 March 2026 00:47:10 +0000 (0:00:00.711) 0:00:05.610 **********
2026-03-30 00:53:06.619967 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:53:06.619974 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:53:06.619981 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:53:06.619987 | orchestrator |
2026-03-30 00:53:06.620003 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-30 00:53:06.620017 | orchestrator | Monday 30 March 2026 00:47:12 +0000 (0:00:01.854) 0:00:07.464 **********
2026-03-30 00:53:06.620031 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-30 00:53:06.620038 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-30 00:53:06.620044 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-30 00:53:06.620051 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-30 00:53:06.620057 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-30 00:53:06.620064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-30 00:53:06.620082 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-30 00:53:06.620090 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-30 00:53:06.620103 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-30 00:53:06.620110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-30 00:53:06.620117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-30 00:53:06.620123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-30 00:53:06.620130 | orchestrator |
2026-03-30 00:53:06.620137 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-30 00:53:06.620144 | orchestrator | Monday 30 March 2026 00:47:14 +0000 (0:00:02.337) 0:00:09.802 **********
2026-03-30 00:53:06.620152 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-30 00:53:06.620163 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-30 00:53:06.620170 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-30 00:53:06.620178 
| orchestrator |
2026-03-30 00:53:06.620186 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-30 00:53:06.620193 | orchestrator | Monday 30 March 2026 00:47:15 +0000 (0:00:00.992) 0:00:10.794 **********
2026-03-30 00:53:06.620202 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-30 00:53:06.620209 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-30 00:53:06.620233 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-30 00:53:06.620241 | orchestrator |
2026-03-30 00:53:06.620247 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-30 00:53:06.620261 | orchestrator | Monday 30 March 2026 00:47:17 +0000 (0:00:01.823) 0:00:12.618 **********
2026-03-30 00:53:06.620268 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-30 00:53:06.620274 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.620309 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-30 00:53:06.620319 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.620327 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-30 00:53:06.620334 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.620342 | orchestrator |
2026-03-30 00:53:06.620350 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-30 00:53:06.620358 | orchestrator | Monday 30 March 2026 00:47:18 +0000 (0:00:01.522) 0:00:14.140 **********
2026-03-30 00:53:06.620369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.620472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.620485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.620493 | orchestrator | 2026-03-30 00:53:06.620501 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-30 00:53:06.620508 | orchestrator | Monday 30 March 2026 00:47:20 +0000 (0:00:02.046) 0:00:16.186 ********** 2026-03-30 00:53:06.620515 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.620521 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.620528 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.620534 | orchestrator | 2026-03-30 00:53:06.620541 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-30 00:53:06.620547 | orchestrator | Monday 30 March 2026 00:47:22 +0000 (0:00:01.198) 0:00:17.385 
********** 2026-03-30 00:53:06.620561 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-30 00:53:06.620569 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-30 00:53:06.620575 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-30 00:53:06.620582 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-30 00:53:06.620588 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-30 00:53:06.620601 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-30 00:53:06.620609 | orchestrator | 2026-03-30 00:53:06.620616 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-30 00:53:06.620622 | orchestrator | Monday 30 March 2026 00:47:24 +0000 (0:00:02.652) 0:00:20.037 ********** 2026-03-30 00:53:06.620637 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.620644 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.620666 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.620681 | orchestrator | 2026-03-30 00:53:06.620687 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-30 00:53:06.620702 | orchestrator | Monday 30 March 2026 00:47:25 +0000 (0:00:00.989) 0:00:21.027 ********** 2026-03-30 00:53:06.620709 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.620717 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.620724 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.620730 | orchestrator | 2026-03-30 00:53:06.620737 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-30 00:53:06.620744 | orchestrator | Monday 30 March 2026 00:47:27 +0000 (0:00:01.890) 0:00:22.917 ********** 2026-03-30 00:53:06.620751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.620766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.620773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.620837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-30 00:53:06.620848 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.620855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.620863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.620878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.620885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.620898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.620906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-30 00:53:06.620912 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.620923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.620931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-30 00:53:06.620943 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.620950 | orchestrator | 2026-03-30 
00:53:06.620956 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-30 00:53:06.620963 | orchestrator | Monday 30 March 2026 00:47:29 +0000 (0:00:01.463) 0:00:24.381 ********** 2026-03-30 00:53:06.620971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.620997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.621014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca', 
'__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-30 00:53:06.621030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.621044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca', 
'__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-30 00:53:06.621055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.621082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca', 
'__omit_place_holder__c1d56d2e4ad50111cd4f72e9da3c79bc2749adca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-30 00:53:06.621112 | orchestrator | 2026-03-30 00:53:06.621119 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-30 00:53:06.621137 | orchestrator | Monday 30 March 2026 00:47:32 +0000 (0:00:03.205) 0:00:27.586 ********** 2026-03-30 00:53:06.621151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.621246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.621252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.621264 | orchestrator | 2026-03-30 00:53:06.621271 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-30 00:53:06.621278 | orchestrator | Monday 30 March 2026 00:47:35 +0000 (0:00:03.330) 0:00:30.917 ********** 2026-03-30 00:53:06.621292 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-30 00:53:06.621300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-30 00:53:06.621305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-30 00:53:06.621312 | orchestrator | 2026-03-30 00:53:06.621319 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-30 00:53:06.621325 | orchestrator | Monday 30 March 2026 00:47:38 +0000 (0:00:02.446) 0:00:33.363 ********** 2026-03-30 00:53:06.621339 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-30 00:53:06.621352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-30 00:53:06.621359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-30 00:53:06.621366 | orchestrator | 2026-03-30 00:53:06.621381 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-30 00:53:06.621389 | orchestrator | Monday 30 March 2026 00:47:43 +0000 (0:00:05.894) 0:00:39.258 ********** 2026-03-30 00:53:06.621396 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.621410 
| orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.621417 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.621423 | orchestrator | 2026-03-30 00:53:06.621429 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-30 00:53:06.621436 | orchestrator | Monday 30 March 2026 00:47:44 +0000 (0:00:00.785) 0:00:40.043 ********** 2026-03-30 00:53:06.621449 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-30 00:53:06.621457 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-30 00:53:06.621464 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-30 00:53:06.621471 | orchestrator | 2026-03-30 00:53:06.621478 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-30 00:53:06.621484 | orchestrator | Monday 30 March 2026 00:47:46 +0000 (0:00:02.070) 0:00:42.113 ********** 2026-03-30 00:53:06.621491 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-30 00:53:06.621498 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-30 00:53:06.621508 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-30 00:53:06.621515 | orchestrator | 2026-03-30 00:53:06.621521 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-30 00:53:06.621527 | orchestrator | Monday 30 March 2026 00:47:48 +0000 (0:00:01.603) 0:00:43.717 ********** 2026-03-30 00:53:06.621542 | orchestrator | changed: [testbed-node-1] => 
(item=haproxy.pem) 2026-03-30 00:53:06.621548 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-30 00:53:06.621555 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-30 00:53:06.621562 | orchestrator | 2026-03-30 00:53:06.621568 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-30 00:53:06.621575 | orchestrator | Monday 30 March 2026 00:47:50 +0000 (0:00:01.693) 0:00:45.411 ********** 2026-03-30 00:53:06.621582 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-30 00:53:06.621589 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-30 00:53:06.621595 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-30 00:53:06.621602 | orchestrator | 2026-03-30 00:53:06.621608 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-30 00:53:06.621615 | orchestrator | Monday 30 March 2026 00:47:51 +0000 (0:00:01.765) 0:00:47.176 ********** 2026-03-30 00:53:06.621622 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.621631 | orchestrator | 2026-03-30 00:53:06.621638 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-30 00:53:06.621644 | orchestrator | Monday 30 March 2026 00:47:52 +0000 (0:00:00.651) 0:00:47.827 ********** 2026-03-30 00:53:06.621651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.621759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.621766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.621773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.621803 | orchestrator |
2026-03-30 00:53:06.621817 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-30 00:53:06.621824 | orchestrator | Monday 30 March 2026 00:47:55 +0000 (0:00:02.908) 0:00:50.736 **********
2026-03-30 00:53:06.621837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.621844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.621855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.621862 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.621869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.621884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.621891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.621904 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.621919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.621930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.621945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.621952 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.621959 | orchestrator |
2026-03-30 00:53:06.621976 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-30 00:53:06.621987 | orchestrator | Monday 30 March 2026 00:47:56 +0000 (0:00:00.754) 0:00:51.490 **********
2026-03-30 00:53:06.621994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622070 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.622078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622120 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.622131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622159 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.622166 | orchestrator |
2026-03-30 00:53:06.622173 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-30 00:53:06.622180 | orchestrator | Monday 30 March 2026 00:47:57 +0000 (0:00:01.098) 0:00:52.589 **********
2026-03-30 00:53:06.622188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622214 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.622221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622254 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.622260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622283 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.622289 | orchestrator |
2026-03-30 00:53:06.622296 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-30 00:53:06.622303 | orchestrator | Monday 30 March 2026 00:47:58 +0000 (0:00:00.958) 0:00:53.547 **********
2026-03-30 00:53:06.622310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622339 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.622345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622366 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.622495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622612 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.622619 | orchestrator |
2026-03-30 00:53:06.622673 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-30 00:53:06.622698 | orchestrator | Monday 30 March 2026 00:47:58 +0000 (0:00:00.705) 0:00:54.253 **********
2026-03-30 00:53:06.622705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622724 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.622893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622924 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.622931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.622948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.622955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.622961 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.622967 | orchestrator |
2026-03-30 00:53:06.623660 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-30 00:53:06.623705 | orchestrator | Monday 30 March 2026 00:48:00 +0000 (0:00:01.078) 0:00:55.331 **********
2026-03-30 00:53:06.623718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.624198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.624220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.624228 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.624244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.624263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.624269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.624276 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.624282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.624366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.624380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-30 00:53:06.624384 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.624388 | orchestrator |
2026-03-30 00:53:06.624393 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-30 00:53:06.624398 | orchestrator | Monday 30 March 2026 00:48:00 +0000 (0:00:00.666) 0:00:55.998 **********
2026-03-30 00:53:06.624412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-30 00:53:06.624416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-30 00:53:06.624420 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.624424 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.624428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.624432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.624469 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.624475 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.624485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.624495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.624500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.624504 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.624508 | orchestrator | 2026-03-30 00:53:06.624512 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-30 00:53:06.624515 | orchestrator | Monday 30 March 2026 00:48:01 +0000 (0:00:00.563) 0:00:56.561 ********** 2026-03-30 00:53:06.624519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.624523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.624527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.624531 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.624564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.624574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.624587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.624591 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.624595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-30 00:53:06.624599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-30 00:53:06.624603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-30 00:53:06.624607 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.624611 | orchestrator | 2026-03-30 00:53:06.624616 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-30 00:53:06.624620 | orchestrator | Monday 30 March 2026 00:48:02 +0000 (0:00:01.506) 0:00:58.068 ********** 2026-03-30 00:53:06.624624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-30 00:53:06.624641 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-30 00:53:06.624688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-30 00:53:06.624694 | orchestrator | 2026-03-30 00:53:06.624697 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-30 00:53:06.624701 | orchestrator | Monday 30 March 2026 00:48:05 +0000 (0:00:02.538) 0:01:00.607 ********** 2026-03-30 00:53:06.624705 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-30 00:53:06.624709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-30 00:53:06.624713 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-30 00:53:06.624716 | orchestrator | 2026-03-30 00:53:06.624720 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-30 00:53:06.624724 | orchestrator | Monday 30 March 2026 00:48:06 +0000 (0:00:01.404) 0:01:02.011 ********** 2026-03-30 00:53:06.624734 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-30 00:53:06.624738 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-30 00:53:06.624742 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-30 00:53:06.624751 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-30 00:53:06.626131 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.626160 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-30 00:53:06.626175 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.626180 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-30 00:53:06.626184 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.626188 | orchestrator | 2026-03-30 00:53:06.626192 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-30 00:53:06.626197 | orchestrator | Monday 30 March 2026 00:48:07 +0000 (0:00:01.197) 0:01:03.208 ********** 2026-03-30 00:53:06.626202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.626209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.626214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-30 00:53:06.626239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.626243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.626257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-30 00:53:06.626261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.626266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.626270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-30 00:53:06.626274 | orchestrator | 2026-03-30 00:53:06.626278 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-30 00:53:06.626282 | orchestrator | Monday 30 March 2026 00:48:10 +0000 (0:00:02.827) 0:01:06.036 ********** 2026-03-30 00:53:06.626289 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.626293 | orchestrator | 2026-03-30 00:53:06.626297 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-30 00:53:06.626300 | orchestrator | Monday 30 
March 2026 00:48:11 +0000 (0:00:00.650) 0:01:06.686 ********** 2026-03-30 00:53:06.626305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-30 00:53:06.626315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.626320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-30 00:53:06.626334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.626341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-30 00:53:06.626359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.626363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626374 | orchestrator | 2026-03-30 00:53:06.626378 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-30 00:53:06.626382 | orchestrator | Monday 30 March 2026 00:48:17 +0000 (0:00:06.317) 0:01:13.003 ********** 2026-03-30 00:53:06.626386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-30 00:53:06.626395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-03-30 00:53:06.626399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626409 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.626413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-30 00:53:06.626422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-30 00:53:06.626429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.626440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.626447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626463 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.626467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626478 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.626482 | orchestrator | 2026-03-30 00:53:06.626485 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-30 00:53:06.626489 | orchestrator | Monday 30 March 2026 00:48:18 +0000 (0:00:01.001) 0:01:14.005 ********** 2026-03-30 00:53:06.626494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-30 00:53:06.626498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-30 00:53:06.626503 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.626508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-30 00:53:06.626514 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-30 00:53:06.626520 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.626526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-30 00:53:06.626532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-30 00:53:06.626538 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.626544 | orchestrator | 2026-03-30 00:53:06.626554 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-30 00:53:06.626560 | orchestrator | Monday 30 March 2026 00:48:19 +0000 (0:00:01.033) 0:01:15.038 ********** 2026-03-30 00:53:06.626566 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.626571 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.626575 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.626579 | orchestrator | 2026-03-30 00:53:06.626583 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-30 00:53:06.626587 | orchestrator | Monday 30 March 2026 00:48:21 +0000 (0:00:01.551) 0:01:16.589 ********** 2026-03-30 00:53:06.626590 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.626594 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.626598 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.626602 | orchestrator | 2026-03-30 00:53:06.626605 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-30 00:53:06.626609 | 
orchestrator | Monday 30 March 2026 00:48:23 +0000 (0:00:01.792) 0:01:18.382 ********** 2026-03-30 00:53:06.626613 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.626617 | orchestrator | 2026-03-30 00:53:06.626620 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-30 00:53:06.626624 | orchestrator | Monday 30 March 2026 00:48:23 +0000 (0:00:00.620) 0:01:19.002 ********** 2026-03-30 00:53:06.626631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.626640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.626657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.626674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626682 | orchestrator | 2026-03-30 00:53:06.626686 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-30 00:53:06.626690 | orchestrator | Monday 30 March 2026 00:48:30 +0000 (0:00:06.439) 0:01:25.442 ********** 2026-03-30 00:53:06.626697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.626702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626730 | orchestrator | skipping: [testbed-node-0] 2026-03-30 
00:53:06.626736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.626742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626754 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.626773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.626857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-03-30 00:53:06.626863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.626867 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.626871 | orchestrator | 2026-03-30 00:53:06.626875 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-30 00:53:06.626879 | orchestrator | Monday 30 March 2026 00:48:31 +0000 (0:00:01.407) 0:01:26.850 ********** 2026-03-30 00:53:06.626883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-30 00:53:06.626887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-30 00:53:06.626891 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.626895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-30 00:53:06.626899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-30 00:53:06.626903 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.626907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-30 00:53:06.626911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-30 00:53:06.626915 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.626918 | orchestrator | 2026-03-30 00:53:06.626922 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-30 00:53:06.626926 | orchestrator | Monday 30 March 2026 00:48:32 +0000 (0:00:01.213) 0:01:28.063 ********** 2026-03-30 00:53:06.626930 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.626934 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.626937 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.626942 | orchestrator | 2026-03-30 00:53:06.626948 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-30 00:53:06.626963 | orchestrator | Monday 30 March 2026 00:48:35 +0000 (0:00:02.965) 0:01:31.029 ********** 2026-03-30 00:53:06.626970 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.626977 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.626982 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.626989 | orchestrator | 2026-03-30 00:53:06.627001 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-30 00:53:06.627007 | orchestrator | Monday 30 March 2026 00:48:37 +0000 (0:00:02.152) 
0:01:33.181 ********** 2026-03-30 00:53:06.627012 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627018 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627024 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627030 | orchestrator | 2026-03-30 00:53:06.627036 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-30 00:53:06.627042 | orchestrator | Monday 30 March 2026 00:48:38 +0000 (0:00:00.259) 0:01:33.440 ********** 2026-03-30 00:53:06.627048 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.627054 | orchestrator | 2026-03-30 00:53:06.627060 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-30 00:53:06.627066 | orchestrator | Monday 30 March 2026 00:48:38 +0000 (0:00:00.802) 0:01:34.243 ********** 2026-03-30 00:53:06.627077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-30 00:53:06.627086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-30 00:53:06.627090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-30 00:53:06.627094 | orchestrator | 2026-03-30 00:53:06.627098 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-30 00:53:06.627108 | orchestrator | Monday 30 March 2026 00:48:41 +0000 (0:00:02.863) 0:01:37.107 ********** 2026-03-30 00:53:06.627119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-30 00:53:06.627125 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-30 00:53:06.627163 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-30 00:53:06.627179 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627183 | orchestrator | 2026-03-30 00:53:06.627187 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-30 00:53:06.627190 | orchestrator | Monday 30 March 2026 00:48:43 +0000 (0:00:02.028) 0:01:39.135 ********** 2026-03-30 00:53:06.627195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-30 00:53:06.627202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-30 00:53:06.627212 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627216 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-30 00:53:06.627220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-30 00:53:06.627224 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-30 00:53:06.627236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-30 00:53:06.627240 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627243 | orchestrator | 2026-03-30 00:53:06.627247 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2026-03-30 00:53:06.627251 | orchestrator | Monday 30 March 2026 00:48:46 +0000 (0:00:02.226) 0:01:41.362 ********** 2026-03-30 00:53:06.627255 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627259 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627262 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627266 | orchestrator | 2026-03-30 00:53:06.627270 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-30 00:53:06.627274 | orchestrator | Monday 30 March 2026 00:48:46 +0000 (0:00:00.428) 0:01:41.790 ********** 2026-03-30 00:53:06.627278 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627281 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627285 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627289 | orchestrator | 2026-03-30 00:53:06.627293 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-30 00:53:06.627296 | orchestrator | Monday 30 March 2026 00:48:47 +0000 (0:00:01.117) 0:01:42.908 ********** 2026-03-30 00:53:06.627303 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.627308 | orchestrator | 2026-03-30 00:53:06.627312 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-30 00:53:06.627316 | orchestrator | Monday 30 March 2026 00:48:48 +0000 (0:00:00.780) 0:01:43.688 ********** 2026-03-30 00:53:06.627320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.627328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.627353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.627376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627387 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627394 | orchestrator | 2026-03-30 00:53:06.627398 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-30 00:53:06.627402 | orchestrator | Monday 30 March 2026 00:48:52 +0000 (0:00:03.690) 0:01:47.379 ********** 2026-03-30 00:53:06.627406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.627410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2026-03-30 00:53:06.627426 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.627479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.627487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627502 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627514 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627522 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627526 | orchestrator | 2026-03-30 00:53:06.627530 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-30 00:53:06.627534 | orchestrator | Monday 30 March 2026 00:48:52 +0000 (0:00:00.591) 0:01:47.970 ********** 2026-03-30 00:53:06.627538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-30 00:53:06.627542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-30 00:53:06.627546 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-30 00:53:06.627554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-30 00:53:06.627558 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-30 00:53:06.627568 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-30 00:53:06.627572 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627575 | orchestrator | 2026-03-30 00:53:06.627579 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-30 00:53:06.627583 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.982) 0:01:48.952 ********** 2026-03-30 00:53:06.627587 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.627591 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.627595 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.627598 | orchestrator | 2026-03-30 00:53:06.627602 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-30 00:53:06.627606 | orchestrator | Monday 30 March 2026 00:48:55 +0000 (0:00:01.339) 0:01:50.291 ********** 2026-03-30 00:53:06.627613 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.627617 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.627620 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.627624 | orchestrator | 2026-03-30 00:53:06.627628 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-30 00:53:06.627631 | orchestrator | Monday 30 March 2026 00:48:56 +0000 (0:00:01.964) 0:01:52.255 ********** 2026-03-30 00:53:06.627635 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627639 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627643 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627647 | orchestrator | 2026-03-30 00:53:06.627651 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-30 00:53:06.627654 | orchestrator | 
Monday 30 March 2026 00:48:57 +0000 (0:00:00.316) 0:01:52.572 ********** 2026-03-30 00:53:06.627658 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.627664 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.627668 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627671 | orchestrator | 2026-03-30 00:53:06.627675 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-30 00:53:06.627679 | orchestrator | Monday 30 March 2026 00:48:57 +0000 (0:00:00.257) 0:01:52.830 ********** 2026-03-30 00:53:06.627683 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.627687 | orchestrator | 2026-03-30 00:53:06.627690 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-30 00:53:06.627696 | orchestrator | Monday 30 March 2026 00:48:58 +0000 (0:00:00.901) 0:01:53.731 ********** 2026-03-30 00:53:06.627703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 00:53:06.627710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 00:53:06.627716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 00:53:06.627738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 00:53:06.627754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 00:53:06.627845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  
2026-03-30 00:53:06.627857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627881 | orchestrator | 2026-03-30 00:53:06.627885 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-30 00:53:06.627889 | orchestrator | Monday 30 March 2026 00:49:02 +0000 (0:00:04.350) 0:01:58.082 ********** 2026-03-30 00:53:06.627893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 00:53:06.627903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 00:53:06.627910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 00:53:06.627914 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 00:53:06.627922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 00:53:06.627943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 00:53:06.627958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627962 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627983 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.627987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.627997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.628002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.628010 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.628020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.628024 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628028 | orchestrator | 2026-03-30 00:53:06.628032 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-30 00:53:06.628036 | 
orchestrator | Monday 30 March 2026 00:49:03 +0000 (0:00:00.890) 0:01:58.972 ********** 2026-03-30 00:53:06.628040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-30 00:53:06.628044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-30 00:53:06.628050 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-30 00:53:06.628058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-30 00:53:06.628061 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-30 00:53:06.628072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-30 00:53:06.628076 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628080 | orchestrator | 2026-03-30 00:53:06.628084 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-30 00:53:06.628088 | orchestrator | Monday 30 March 2026 00:49:04 +0000 (0:00:01.184) 0:02:00.157 ********** 
2026-03-30 00:53:06.628092 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.628096 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.628100 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.628104 | orchestrator | 2026-03-30 00:53:06.628108 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-30 00:53:06.628111 | orchestrator | Monday 30 March 2026 00:49:06 +0000 (0:00:01.329) 0:02:01.486 ********** 2026-03-30 00:53:06.628118 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.628122 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.628126 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.628130 | orchestrator | 2026-03-30 00:53:06.628134 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-30 00:53:06.628138 | orchestrator | Monday 30 March 2026 00:49:08 +0000 (0:00:01.946) 0:02:03.433 ********** 2026-03-30 00:53:06.628142 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628146 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628149 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628153 | orchestrator | 2026-03-30 00:53:06.628158 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-30 00:53:06.628162 | orchestrator | Monday 30 March 2026 00:49:08 +0000 (0:00:00.254) 0:02:03.688 ********** 2026-03-30 00:53:06.628165 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.628170 | orchestrator | 2026-03-30 00:53:06.628174 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-30 00:53:06.628178 | orchestrator | Monday 30 March 2026 00:49:09 +0000 (0:00:00.835) 0:02:04.523 ********** 2026-03-30 00:53:06.628398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 00:53:06.628432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.628449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 00:53:06.628467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.628480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 00:53:06.628496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.628503 | orchestrator | 2026-03-30 00:53:06.628510 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-30 00:53:06.628521 | orchestrator | Monday 30 March 2026 00:49:12 +0000 (0:00:03.579) 0:02:08.103 ********** 2026-03-30 00:53:06.628529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 00:53:06.628540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.628548 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 00:53:06.628570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.628574 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 00:53:06.628596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.628601 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628605 | orchestrator | 2026-03-30 00:53:06.628609 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-30 00:53:06.628613 | orchestrator | Monday 30 March 2026 00:49:15 +0000 (0:00:02.478) 0:02:10.581 ********** 2026-03-30 00:53:06.628618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-30 00:53:06.628623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-30 00:53:06.628636 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-30 00:53:06.628648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-30 00:53:06.628652 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-30 00:53:06.628661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}})  2026-03-30 00:53:06.628664 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628668 | orchestrator | 2026-03-30 00:53:06.628672 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-30 00:53:06.628676 | orchestrator | Monday 30 March 2026 00:49:18 +0000 (0:00:02.779) 0:02:13.361 ********** 2026-03-30 00:53:06.628679 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.628683 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.628687 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.628691 | orchestrator | 2026-03-30 00:53:06.628695 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-30 00:53:06.628699 | orchestrator | Monday 30 March 2026 00:49:19 +0000 (0:00:01.282) 0:02:14.643 ********** 2026-03-30 00:53:06.628703 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.628708 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.628715 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.628720 | orchestrator | 2026-03-30 00:53:06.628730 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-30 00:53:06.628741 | orchestrator | Monday 30 March 2026 00:49:21 +0000 (0:00:01.752) 0:02:16.395 ********** 2026-03-30 00:53:06.628747 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628753 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628759 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628764 | orchestrator | 2026-03-30 00:53:06.628771 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-30 00:53:06.628783 | orchestrator | Monday 30 March 2026 00:49:21 +0000 (0:00:00.274) 0:02:16.670 ********** 2026-03-30 00:53:06.628809 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.628815 | 
orchestrator | 2026-03-30 00:53:06.628821 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-30 00:53:06.628828 | orchestrator | Monday 30 March 2026 00:49:22 +0000 (0:00:00.901) 0:02:17.571 ********** 2026-03-30 00:53:06.628834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 00:53:06.628846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 00:53:06.628854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 00:53:06.628861 | orchestrator | 2026-03-30 00:53:06.628868 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-30 00:53:06.628873 | orchestrator | Monday 30 March 2026 00:49:25 +0000 (0:00:02.741) 0:02:20.313 ********** 2026-03-30 00:53:06.628880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 00:53:06.628886 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 00:53:06.628907 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 00:53:06.628915 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628919 | orchestrator | 2026-03-30 00:53:06.628923 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-30 00:53:06.628927 | orchestrator | Monday 30 March 2026 00:49:25 +0000 (0:00:00.355) 0:02:20.668 ********** 2026-03-30 00:53:06.628932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-30 00:53:06.628940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-30 00:53:06.628946 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.628950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}})  2026-03-30 00:53:06.628954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-30 00:53:06.628958 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.628961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-30 00:53:06.628965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-30 00:53:06.628969 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.628973 | orchestrator | 2026-03-30 00:53:06.628977 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-30 00:53:06.628981 | orchestrator | Monday 30 March 2026 00:49:26 +0000 (0:00:00.710) 0:02:21.379 ********** 2026-03-30 00:53:06.628985 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.628990 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.628995 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.628999 | orchestrator | 2026-03-30 00:53:06.629003 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-30 00:53:06.629008 | orchestrator | Monday 30 March 2026 00:49:27 +0000 (0:00:01.382) 0:02:22.761 ********** 2026-03-30 00:53:06.629012 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.629017 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.629022 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.629026 | orchestrator | 2026-03-30 00:53:06.629030 | orchestrator 
| TASK [include_role : heat] ***************************************************** 2026-03-30 00:53:06.629034 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:02.111) 0:02:24.873 ********** 2026-03-30 00:53:06.629039 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629047 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629052 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629056 | orchestrator | 2026-03-30 00:53:06.629061 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-30 00:53:06.629065 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:00.299) 0:02:25.173 ********** 2026-03-30 00:53:06.629070 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.629074 | orchestrator | 2026-03-30 00:53:06.629079 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-30 00:53:06.629083 | orchestrator | Monday 30 March 2026 00:49:30 +0000 (0:00:01.029) 0:02:26.202 ********** 2026-03-30 00:53:06.629096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:53:06.629103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:53:06.629121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:53:06.629127 | orchestrator | 2026-03-30 00:53:06.629132 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-30 00:53:06.629136 | orchestrator | Monday 30 March 2026 00:49:34 +0000 
(0:00:03.079) 0:02:29.282 ********** 2026-03-30 00:53:06.629144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:53:06.629154 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:53:06.629167 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:53:06.629185 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629189 | orchestrator | 2026-03-30 00:53:06.629194 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-30 00:53:06.629198 | orchestrator | Monday 30 March 2026 00:49:34 +0000 (0:00:00.650) 0:02:29.933 ********** 2026-03-30 00:53:06.629204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-30 00:53:06.629209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-30 00:53:06.629219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-30 00:53:06.629224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-30 00:53:06.629230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-30 00:53:06.629238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-30 00:53:06.629244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-30 00:53:06.629249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-30 00:53:06.629254 | orchestrator | skipping: [testbed-node-0] 2026-03-30 
00:53:06.629258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-30 00:53:06.629263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-30 00:53:06.629268 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-30 00:53:06.629280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-30 00:53:06.629284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-30 00:53:06.629289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-30 00:53:06.629294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-30 00:53:06.629298 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629303 | orchestrator | 2026-03-30 00:53:06.629307 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-30 00:53:06.629312 | orchestrator | Monday 30 March 2026 00:49:35 +0000 (0:00:00.962) 0:02:30.895 ********** 2026-03-30 00:53:06.629316 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.629321 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.629325 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.629329 | orchestrator | 2026-03-30 00:53:06.629334 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-30 00:53:06.629339 | orchestrator | Monday 30 March 2026 00:49:37 +0000 (0:00:01.410) 0:02:32.305 ********** 2026-03-30 00:53:06.629343 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.629352 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.629356 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.629361 | orchestrator | 2026-03-30 00:53:06.629366 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-30 00:53:06.629370 | orchestrator | Monday 30 March 2026 00:49:38 +0000 (0:00:01.839) 0:02:34.145 ********** 2026-03-30 00:53:06.629375 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629379 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629383 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629386 | orchestrator | 2026-03-30 00:53:06.629391 | orchestrator | TASK [include_role : ironic] 
*************************************************** 2026-03-30 00:53:06.629437 | orchestrator | Monday 30 March 2026 00:49:39 +0000 (0:00:00.264) 0:02:34.410 ********** 2026-03-30 00:53:06.629447 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629451 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629455 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629459 | orchestrator | 2026-03-30 00:53:06.629463 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-30 00:53:06.629467 | orchestrator | Monday 30 March 2026 00:49:39 +0000 (0:00:00.293) 0:02:34.704 ********** 2026-03-30 00:53:06.629471 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.629475 | orchestrator | 2026-03-30 00:53:06.629478 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-30 00:53:06.629482 | orchestrator | Monday 30 March 2026 00:49:40 +0000 (0:00:00.991) 0:02:35.695 ********** 2026-03-30 00:53:06.629487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:53:06.629496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:53:06.629501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:53:06.629508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:53:06.629516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:53:06.629521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:53:06.629525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:53:06.629533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:53:06.629537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:53:06.629545 | orchestrator | 2026-03-30 00:53:06.629549 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-30 00:53:06.629553 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:03.404) 0:02:39.100 ********** 2026-03-30 00:53:06.629560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:53:06.629565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:53:06.629569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:53:06.629576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:53:06.629584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:53:06.629588 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:53:06.629599 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:53:06.629607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:53:06.629611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:53:06.629615 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629619 | orchestrator | 2026-03-30 00:53:06.629623 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-30 00:53:06.629748 | orchestrator | Monday 30 March 2026 00:49:44 +0000 (0:00:00.687) 0:02:39.788 ********** 2026-03-30 00:53:06.629762 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-30 00:53:06.629777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-30 00:53:06.629824 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-30 00:53:06.629836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-30 00:53:06.629839 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-30 00:53:06.629851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-30 00:53:06.629856 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629859 | 
orchestrator | 2026-03-30 00:53:06.629864 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-30 00:53:06.629867 | orchestrator | Monday 30 March 2026 00:49:45 +0000 (0:00:00.889) 0:02:40.677 ********** 2026-03-30 00:53:06.629871 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.629875 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.629879 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.629883 | orchestrator | 2026-03-30 00:53:06.629887 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-30 00:53:06.629891 | orchestrator | Monday 30 March 2026 00:49:46 +0000 (0:00:01.194) 0:02:41.873 ********** 2026-03-30 00:53:06.629895 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.629898 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.629902 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.629906 | orchestrator | 2026-03-30 00:53:06.629910 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-30 00:53:06.629914 | orchestrator | Monday 30 March 2026 00:49:48 +0000 (0:00:01.789) 0:02:43.662 ********** 2026-03-30 00:53:06.629918 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.629921 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.629925 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.629929 | orchestrator | 2026-03-30 00:53:06.629933 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-30 00:53:06.629937 | orchestrator | Monday 30 March 2026 00:49:48 +0000 (0:00:00.259) 0:02:43.922 ********** 2026-03-30 00:53:06.629941 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.629945 | orchestrator | 2026-03-30 00:53:06.629949 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-03-30 00:53:06.629953 | orchestrator | Monday 30 March 2026 00:49:49 +0000 (0:00:01.035) 0:02:44.957 ********** 2026-03-30 00:53:06.629957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 00:53:06.630010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 00:53:06.630069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 00:53:06.630106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630110 | orchestrator | 2026-03-30 00:53:06.630114 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-30 00:53:06.630118 | orchestrator | Monday 30 March 2026 00:49:53 +0000 (0:00:04.164) 0:02:49.122 ********** 2026-03-30 00:53:06.630156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 00:53:06.630166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630170 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.630174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 00:53:06.630178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630186 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.630225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 00:53:06.630235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630242 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.630248 | orchestrator | 2026-03-30 00:53:06.630254 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-30 00:53:06.630272 | orchestrator | Monday 30 March 2026 00:49:54 +0000 (0:00:00.661) 0:02:49.784 ********** 2026-03-30 00:53:06.630279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-30 00:53:06.630293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-30 00:53:06.630299 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.630306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-30 00:53:06.630311 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-30 00:53:06.630316 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.630322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-30 00:53:06.630328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-30 00:53:06.630335 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.630341 | orchestrator | 2026-03-30 00:53:06.630353 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-30 00:53:06.630360 | orchestrator | Monday 30 March 2026 00:49:55 +0000 (0:00:00.970) 0:02:50.754 ********** 2026-03-30 00:53:06.630366 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.630372 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.630378 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.630384 | orchestrator | 2026-03-30 00:53:06.630390 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-30 00:53:06.630396 | orchestrator | Monday 30 March 2026 00:49:56 +0000 (0:00:01.266) 0:02:52.021 ********** 2026-03-30 00:53:06.630402 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.630408 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.630415 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.630422 | orchestrator | 2026-03-30 00:53:06.630426 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-30 00:53:06.630429 | 
orchestrator | Monday 30 March 2026 00:49:58 +0000 (0:00:01.981) 0:02:54.002 ********** 2026-03-30 00:53:06.630433 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.630437 | orchestrator | 2026-03-30 00:53:06.630441 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-30 00:53:06.630444 | orchestrator | Monday 30 March 2026 00:49:59 +0000 (0:00:01.067) 0:02:55.070 ********** 2026-03-30 00:53:06.630449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-30 00:53:06.630511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 
00:53:06.630519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-30 00:53:06.630542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-30 00:53:06.630593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630611 | orchestrator | 2026-03-30 00:53:06.630614 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-30 00:53:06.630619 | orchestrator | Monday 30 March 2026 00:50:04 +0000 (0:00:05.053) 0:03:00.124 ********** 2026-03-30 00:53:06.630650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-30 00:53:06.630656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630675 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.630680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-30 00:53:06.630684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.630688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.630730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.630740 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.630751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-30 00:53:06.630764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.630771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.630777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.630802 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.630808 | orchestrator |
2026-03-30 00:53:06.630812 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-30 00:53:06.630816 | orchestrator | Monday 30 March 2026 00:50:05 +0000 (0:00:00.943) 0:03:01.068 **********
2026-03-30 00:53:06.630820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-30 00:53:06.630825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-30 00:53:06.630829 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.630847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-30 00:53:06.630892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-30 00:53:06.630902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-30 00:53:06.630908 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.630914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-30 00:53:06.630926 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.630932 | orchestrator |
2026-03-30 00:53:06.630938 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-30 00:53:06.630943 | orchestrator | Monday 30 March 2026 00:50:06 +0000 (0:00:01.162) 0:03:02.230 **********
2026-03-30 00:53:06.630949 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:53:06.630955 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:53:06.630960 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:53:06.630966 | orchestrator |
2026-03-30 00:53:06.630971 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-30 00:53:06.630989 | orchestrator | Monday 30 March 2026 00:50:08 +0000 (0:00:01.580) 0:03:03.811 **********
2026-03-30 00:53:06.630995 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:53:06.631001 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:53:06.631006 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:53:06.631012 | orchestrator |
2026-03-30 00:53:06.631018 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-30 00:53:06.631028 | orchestrator | Monday 30 March 2026 00:50:10 +0000 (0:00:02.322) 0:03:06.134 **********
2026-03-30 00:53:06.631035 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.631041 | orchestrator |
2026-03-30 00:53:06.631046 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-30 00:53:06.631052 | orchestrator | Monday 30 March 2026 00:50:12 +0000 (0:00:01.597) 0:03:07.731 **********
2026-03-30 00:53:06.631060 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-30 00:53:06.631066 | orchestrator |
2026-03-30 00:53:06.631072 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-30 00:53:06.631079 | orchestrator | Monday 30 March 2026 00:50:15 +0000 (0:00:03.326) 0:03:11.057 **********
2026-03-30 00:53:06.631088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:53:06.631144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-30 00:53:06.631163 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:53:06.631182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-30 00:53:06.631188 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:53:06.631244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-30 00:53:06.631248 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631252 | orchestrator |
2026-03-30 00:53:06.631256 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-03-30 00:53:06.631260 | orchestrator | Monday 30 March 2026 00:50:18 +0000 (0:00:02.919) 0:03:13.977 **********
2026-03-30 00:53:06.631270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:53:06.631274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-30 00:53:06.631282 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:53:06.631327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-30 00:53:06.631331 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:53:06.631375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-30 00:53:06.631383 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631389 | orchestrator |
2026-03-30 00:53:06.631395 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-03-30 00:53:06.631401 | orchestrator | Monday 30 March 2026 00:50:21 +0000 (0:00:02.955) 0:03:16.932 **********
2026-03-30 00:53:06.631407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-30 00:53:06.631418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-30 00:53:06.631425 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-30 00:53:06.631454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-30 00:53:06.631460 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-30 00:53:06.631525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-30 00:53:06.631535 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631544 | orchestrator |
2026-03-30 00:53:06.631551 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-30 00:53:06.631569 | orchestrator | Monday 30 March 2026 00:50:24 +0000 (0:00:02.442) 0:03:19.375 **********
2026-03-30 00:53:06.631577 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:53:06.631583 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:53:06.631589 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:53:06.631595 | orchestrator |
2026-03-30 00:53:06.631601 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-30 00:53:06.631607 | orchestrator | Monday 30 March 2026 00:50:25 +0000 (0:00:01.864) 0:03:21.240 **********
2026-03-30 00:53:06.631613 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631618 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631624 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631630 | orchestrator |
2026-03-30 00:53:06.631638 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-30 00:53:06.631644 | orchestrator | Monday 30 March 2026 00:50:27 +0000 (0:00:01.633) 0:03:22.873 **********
2026-03-30 00:53:06.631651 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631658 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631664 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631670 | orchestrator |
2026-03-30 00:53:06.631676 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-30 00:53:06.631683 | orchestrator | Monday 30 March 2026 00:50:27 +0000 (0:00:00.284) 0:03:23.157 **********
2026-03-30 00:53:06.631689 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.631694 | orchestrator |
2026-03-30 00:53:06.631700 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-30 00:53:06.631710 | orchestrator | Monday 30 March 2026 00:50:29 +0000 (0:00:01.288) 0:03:24.446 **********
2026-03-30 00:53:06.631718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-30 00:53:06.631725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-30 00:53:06.631739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-30 00:53:06.631746 | orchestrator |
2026-03-30 00:53:06.631752 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-30 00:53:06.631758 | orchestrator | Monday 30 March 2026 00:50:30 +0000 (0:00:01.539) 0:03:25.985 **********
2026-03-30 00:53:06.631844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-30 00:53:06.631856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-30 00:53:06.631863 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631869 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-30 00:53:06.631894 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631901 | orchestrator |
2026-03-30 00:53:06.631906 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-30 00:53:06.631912 | orchestrator | Monday 30 March 2026 00:50:31 +0000 (0:00:00.369) 0:03:26.355 **********
2026-03-30 00:53:06.631919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-30 00:53:06.631926 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-30 00:53:06.631938 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-30 00:53:06.631952 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631958 | orchestrator |
2026-03-30 00:53:06.631964 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-30 00:53:06.631971 | orchestrator | Monday 30 March 2026 00:50:31 +0000 (0:00:00.867) 0:03:27.222 **********
2026-03-30 00:53:06.631977 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.631984 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.631990 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.631997 | orchestrator |
2026-03-30 00:53:06.632003 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-30 00:53:06.632009 | orchestrator | Monday 30 March 2026 00:50:32 +0000 (0:00:00.385) 0:03:27.607 **********
2026-03-30 00:53:06.632014 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.632021 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.632028 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.632034 | orchestrator |
2026-03-30 00:53:06.632041 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-30 00:53:06.632047 | orchestrator | Monday 30 March 2026 00:50:33 +0000 (0:00:01.097) 0:03:28.705 **********
2026-03-30 00:53:06.632054 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.632061 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.632067 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.632073 | orchestrator |
2026-03-30 00:53:06.632079 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-30 00:53:06.632130 | orchestrator | Monday 30 March 2026 00:50:33 +0000 (0:00:00.276) 0:03:28.981 **********
2026-03-30 00:53:06.632139 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.632145 | orchestrator |
2026-03-30 00:53:06.632151 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-30 00:53:06.632158 | orchestrator | Monday 30 March 2026 00:50:34 +0000 (0:00:01.246) 0:03:30.227 **********
2026-03-30 00:53:06.632165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-30 00:53:06.632185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 00:53:06.632200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2026-03-30 00:53:06.632273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-30 00:53:06.632289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 00:53:06.632331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632363 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-30 00:53:06.632370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 00:53:06.632493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632561 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 00:53:06.632582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-30 00:53:06.632601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.632718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.632771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-30 00:53:06.632833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-30 00:53:06.632904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-30 00:53:06.632908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-30 00:53:06.632912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.632972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.632986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.632994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-30 00:53:06.632998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-30 00:53:06.633050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.633064 | orchestrator |
2026-03-30 00:53:06.633069 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-30 00:53:06.633074 | orchestrator | Monday 30 March 2026 00:50:38 +0000 (0:00:03.963) 0:03:34.190 **********
2026-03-30 00:53:06.633079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-30 00:53:06.633087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-30 00:53:06.633134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.633158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-30 00:53:06.633259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.633265 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.633272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-30 00:53:06.633330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-30 00:53:06.633359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-30 00:53:06.633456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-30 00:53:06.633520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.633546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.633620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-30 00:53:06.633662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-30 00:53:06.633668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-30 00:53:06.633672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-30 00:53:06.633681 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.633685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-30 00:53:06.633702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-30 00:53:06.633709 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.633715 | orchestrator | 2026-03-30 00:53:06.633725 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-30 00:53:06.633731 | orchestrator | Monday 30 March 2026 00:50:40 +0000 (0:00:02.050) 0:03:36.241 ********** 2026-03-30 00:53:06.633738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-30 00:53:06.633746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-30 00:53:06.633752 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.633758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-30 00:53:06.633764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-30 00:53:06.633775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-30 00:53:06.633782 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.633807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}})  2026-03-30 00:53:06.633813 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.633819 | orchestrator | 2026-03-30 00:53:06.633825 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-30 00:53:06.633838 | orchestrator | Monday 30 March 2026 00:50:42 +0000 (0:00:01.527) 0:03:37.769 ********** 2026-03-30 00:53:06.633842 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.633857 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.633861 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.633865 | orchestrator | 2026-03-30 00:53:06.633869 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-30 00:53:06.633873 | orchestrator | Monday 30 March 2026 00:50:43 +0000 (0:00:01.247) 0:03:39.016 ********** 2026-03-30 00:53:06.633876 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.633880 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.633884 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.633888 | orchestrator | 2026-03-30 00:53:06.633892 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-30 00:53:06.633896 | orchestrator | Monday 30 March 2026 00:50:45 +0000 (0:00:01.937) 0:03:40.954 ********** 2026-03-30 00:53:06.633899 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.633903 | orchestrator | 2026-03-30 00:53:06.633907 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-30 00:53:06.633911 | orchestrator | Monday 30 March 2026 00:50:46 +0000 (0:00:01.261) 0:03:42.215 ********** 2026-03-30 00:53:06.633915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.633946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.633951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.633959 | orchestrator | 2026-03-30 00:53:06.633966 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-30 00:53:06.633970 | orchestrator | Monday 30 March 2026 00:50:49 +0000 (0:00:03.009) 0:03:45.225 ********** 2026-03-30 00:53:06.633974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.633978 | orchestrator | skipping: [testbed-node-0] 2026-03-30 
00:53:06.633982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.633986 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.634005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.634042 | orchestrator | skipping: [testbed-node-1] 
2026-03-30 00:53:06.634051 | orchestrator | 2026-03-30 00:53:06.634058 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-30 00:53:06.634063 | orchestrator | Monday 30 March 2026 00:50:50 +0000 (0:00:00.449) 0:03:45.675 ********** 2026-03-30 00:53:06.634069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634090 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.634097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634109 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.634119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634130 | orchestrator 
| skipping: [testbed-node-2] 2026-03-30 00:53:06.634134 | orchestrator | 2026-03-30 00:53:06.634137 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-30 00:53:06.634141 | orchestrator | Monday 30 March 2026 00:50:51 +0000 (0:00:01.080) 0:03:46.756 ********** 2026-03-30 00:53:06.634145 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.634149 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.634153 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.634156 | orchestrator | 2026-03-30 00:53:06.634160 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-30 00:53:06.634164 | orchestrator | Monday 30 March 2026 00:50:52 +0000 (0:00:01.395) 0:03:48.152 ********** 2026-03-30 00:53:06.634168 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.634172 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.634175 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.634179 | orchestrator | 2026-03-30 00:53:06.634183 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-30 00:53:06.634187 | orchestrator | Monday 30 March 2026 00:50:54 +0000 (0:00:01.886) 0:03:50.038 ********** 2026-03-30 00:53:06.634191 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.634194 | orchestrator | 2026-03-30 00:53:06.634198 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-30 00:53:06.634202 | orchestrator | Monday 30 March 2026 00:50:56 +0000 (0:00:01.293) 0:03:51.332 ********** 2026-03-30 00:53:06.634207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.634232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.634258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.634293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634306 | orchestrator | 2026-03-30 00:53:06.634310 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-30 00:53:06.634315 | orchestrator | Monday 30 March 2026 00:50:59 +0000 (0:00:03.915) 0:03:55.247 ********** 2026-03-30 00:53:06.634320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.634325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634351 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.634359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.634365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634377 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.634384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.634423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.634438 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.634444 | orchestrator | 2026-03-30 00:53:06.634450 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-30 00:53:06.634456 | orchestrator | Monday 30 March 2026 00:51:00 +0000 (0:00:00.566) 0:03:55.814 ********** 2026-03-30 00:53:06.634465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634473 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634491 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.634497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634528 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.634535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-30 00:53:06.634558 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.634564 | orchestrator | 2026-03-30 00:53:06.634570 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-30 00:53:06.634599 | orchestrator | Monday 30 March 2026 00:51:01 +0000 (0:00:00.853) 0:03:56.667 ********** 2026-03-30 00:53:06.634607 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.634611 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.634614 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.634618 | orchestrator | 2026-03-30 00:53:06.634622 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-30 00:53:06.634626 | orchestrator | Monday 30 March 2026 00:51:03 +0000 (0:00:01.610) 0:03:58.277 ********** 2026-03-30 00:53:06.634629 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.634633 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.634637 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.634641 
| orchestrator | 2026-03-30 00:53:06.634645 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-30 00:53:06.634648 | orchestrator | Monday 30 March 2026 00:51:04 +0000 (0:00:01.977) 0:04:00.255 ********** 2026-03-30 00:53:06.634652 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.634656 | orchestrator | 2026-03-30 00:53:06.634659 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-30 00:53:06.634663 | orchestrator | Monday 30 March 2026 00:51:06 +0000 (0:00:01.178) 0:04:01.433 ********** 2026-03-30 00:53:06.634667 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-30 00:53:06.634672 | orchestrator | 2026-03-30 00:53:06.634676 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-30 00:53:06.634679 | orchestrator | Monday 30 March 2026 00:51:07 +0000 (0:00:01.129) 0:04:02.563 ********** 2026-03-30 00:53:06.634687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-30 00:53:06.634692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-30 00:53:06.634702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-30 00:53:06.634708 | orchestrator | 2026-03-30 00:53:06.634717 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-30 00:53:06.634726 | orchestrator | Monday 30 March 2026 00:51:10 +0000 (0:00:03.587) 0:04:06.150 ********** 2026-03-30 00:53:06.634731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.634737 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.634744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.634750 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.634779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.634807 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.634812 | orchestrator | 2026-03-30 00:53:06.634816 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-30 00:53:06.634820 | orchestrator | Monday 30 March 2026 00:51:11 +0000 (0:00:01.111) 0:04:07.262 ********** 2026-03-30 00:53:06.634824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-30 00:53:06.634828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-30 00:53:06.634834 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.634841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-30 00:53:06.634846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-30 00:53:06.634855 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.634859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-30 00:53:06.634863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-30 00:53:06.634867 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.634870 | orchestrator | 2026-03-30 00:53:06.634874 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-30 00:53:06.634878 | orchestrator | Monday 30 March 2026 00:51:13 +0000 (0:00:01.648) 0:04:08.910 ********** 2026-03-30 00:53:06.634882 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.634886 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.634889 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.634893 | orchestrator | 2026-03-30 00:53:06.634897 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-30 00:53:06.634901 | orchestrator | Monday 30 March 2026 00:51:15 +0000 (0:00:02.277) 0:04:11.188 ********** 2026-03-30 00:53:06.634904 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.634908 | orchestrator | 
changed: [testbed-node-1] 2026-03-30 00:53:06.634912 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.634915 | orchestrator | 2026-03-30 00:53:06.634919 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-30 00:53:06.634923 | orchestrator | Monday 30 March 2026 00:51:18 +0000 (0:00:02.678) 0:04:13.866 ********** 2026-03-30 00:53:06.634928 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-30 00:53:06.634932 | orchestrator | 2026-03-30 00:53:06.634936 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-30 00:53:06.634940 | orchestrator | Monday 30 March 2026 00:51:19 +0000 (0:00:00.734) 0:04:14.601 ********** 2026-03-30 00:53:06.634944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.634948 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.634967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.634972 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.634976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.634985 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.634989 | orchestrator | 2026-03-30 00:53:06.634993 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-30 00:53:06.634997 | orchestrator | Monday 30 March 2026 00:51:20 +0000 (0:00:01.084) 0:04:15.686 ********** 2026-03-30 00:53:06.635004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.635008 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.635016 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-30 00:53:06.635024 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635027 | orchestrator | 2026-03-30 00:53:06.635031 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-30 00:53:06.635035 | orchestrator | Monday 30 March 2026 00:51:21 +0000 (0:00:01.302) 0:04:16.988 ********** 2026-03-30 00:53:06.635039 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635042 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635046 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635050 | orchestrator | 2026-03-30 00:53:06.635054 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-30 00:53:06.635057 | orchestrator | Monday 30 March 2026 00:51:22 +0000 (0:00:01.173) 0:04:18.162 ********** 2026-03-30 00:53:06.635061 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.635067 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.635071 | orchestrator | ok: [testbed-node-2] 
2026-03-30 00:53:06.635074 | orchestrator | 2026-03-30 00:53:06.635078 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-30 00:53:06.635082 | orchestrator | Monday 30 March 2026 00:51:25 +0000 (0:00:02.227) 0:04:20.390 ********** 2026-03-30 00:53:06.635086 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.635090 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.635096 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.635103 | orchestrator | 2026-03-30 00:53:06.635107 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-30 00:53:06.635110 | orchestrator | Monday 30 March 2026 00:51:27 +0000 (0:00:02.662) 0:04:23.052 ********** 2026-03-30 00:53:06.635114 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-30 00:53:06.635118 | orchestrator | 2026-03-30 00:53:06.635126 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-30 00:53:06.635130 | orchestrator | Monday 30 March 2026 00:51:28 +0000 (0:00:00.741) 0:04:23.794 ********** 2026-03-30 00:53:06.635150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-30 00:53:06.635154 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-30 00:53:06.635162 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-30 00:53:06.635173 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635177 | orchestrator | 2026-03-30 00:53:06.635181 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-30 00:53:06.635185 | orchestrator | Monday 30 March 2026 00:51:29 +0000 (0:00:01.176) 0:04:24.971 ********** 2026-03-30 00:53:06.635189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-30 00:53:06.635193 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-30 00:53:06.635200 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-30 00:53:06.635214 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635218 | orchestrator | 2026-03-30 00:53:06.635221 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-30 00:53:06.635225 | orchestrator | Monday 30 March 2026 00:51:30 +0000 (0:00:01.092) 0:04:26.064 ********** 2026-03-30 00:53:06.635229 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635233 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635236 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635240 | orchestrator | 2026-03-30 00:53:06.635244 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-30 00:53:06.635248 | orchestrator | Monday 30 March 2026 00:51:32 +0000 (0:00:01.315) 0:04:27.379 ********** 2026-03-30 00:53:06.635252 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.635268 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.635273 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.635276 | orchestrator | 2026-03-30 00:53:06.635280 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-30 00:53:06.635284 | orchestrator | Monday 30 March 2026 00:51:34 +0000 (0:00:02.476) 0:04:29.856 ********** 2026-03-30 00:53:06.635288 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.635292 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.635295 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.635299 | orchestrator | 2026-03-30 00:53:06.635303 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-30 00:53:06.635307 | orchestrator | Monday 30 March 2026 00:51:37 +0000 (0:00:03.006) 0:04:32.863 ********** 2026-03-30 00:53:06.635310 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.635314 | orchestrator | 2026-03-30 00:53:06.635318 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-30 00:53:06.635322 | orchestrator | Monday 30 March 2026 00:51:38 +0000 (0:00:01.184) 0:04:34.048 ********** 2026-03-30 00:53:06.635331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.635335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 00:53:06.635339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.635369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}})  2026-03-30 00:53:06.635376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 00:53:06.635380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 00:53:06.635385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 00:53:06.635392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.635429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.635437 | orchestrator | 2026-03-30 00:53:06.635441 | 
orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-30 00:53:06.635445 | orchestrator | Monday 30 March 2026 00:51:42 +0000 (0:00:03.233) 0:04:37.281 ********** 2026-03-30 00:53:06.635449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.635453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 00:53:06.635469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.635484 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.635496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 00:53:06.635500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635517 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.635525 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 00:53:06.635537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 00:53:06.635545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 00:53:06.635553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 00:53:06.635568 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635572 | orchestrator | 2026-03-30 00:53:06.635576 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-30 00:53:06.635580 | orchestrator | Monday 30 March 2026 00:51:42 +0000 (0:00:00.884) 0:04:38.165 ********** 2026-03-30 00:53:06.635584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-30 00:53:06.635588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-30 00:53:06.635592 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-30 00:53:06.635600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-30 00:53:06.635604 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-30 00:53:06.635618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-30 00:53:06.635621 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635625 | orchestrator | 2026-03-30 00:53:06.635629 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-30 00:53:06.635633 | orchestrator | Monday 30 March 2026 00:51:43 +0000 (0:00:00.815) 0:04:38.981 ********** 2026-03-30 00:53:06.635637 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.635640 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.635644 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.635648 | orchestrator | 2026-03-30 00:53:06.635652 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-30 00:53:06.635656 | orchestrator | Monday 30 March 2026 00:51:45 +0000 (0:00:01.404) 0:04:40.385 ********** 2026-03-30 00:53:06.635659 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.635663 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.635667 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.635671 | orchestrator | 2026-03-30 00:53:06.635675 | orchestrator | TASK [include_role : opensearch] 
*********************************************** 2026-03-30 00:53:06.635678 | orchestrator | Monday 30 March 2026 00:51:47 +0000 (0:00:01.924) 0:04:42.310 ********** 2026-03-30 00:53:06.635682 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:53:06.635686 | orchestrator | 2026-03-30 00:53:06.635690 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-30 00:53:06.635694 | orchestrator | Monday 30 March 2026 00:51:48 +0000 (0:00:01.433) 0:04:43.743 ********** 2026-03-30 00:53:06.635698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:53:06.635720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:53:06.635728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:53:06.635744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:53:06.635753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:53:06.635777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:53:06.635805 | orchestrator | 2026-03-30 00:53:06.635810 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-30 00:53:06.635813 | orchestrator | Monday 30 March 2026 00:51:53 +0000 (0:00:04.912) 0:04:48.656 ********** 2026-03-30 00:53:06.635818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:53:06.635848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:53:06.635853 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:53:06.635863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:53:06.635867 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:53:06.635899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:53:06.635904 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635908 | orchestrator | 2026-03-30 00:53:06.635911 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-30 00:53:06.635915 | orchestrator | Monday 30 March 2026 00:51:54 +0000 (0:00:00.830) 0:04:49.486 ********** 2026-03-30 00:53:06.635919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-30 00:53:06.635923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-30 00:53:06.635927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-30 00:53:06.635935 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-30 00:53:06.635939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-30 00:53:06.635944 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.635948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-30 00:53:06.635952 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.635956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-30 00:53:06.635959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-30 00:53:06.635980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-30 00:53:06.635984 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.635988 | orchestrator | 2026-03-30 00:53:06.635992 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 
2026-03-30 00:53:06.635996 | orchestrator | Monday 30 March 2026 00:51:55 +0000 (0:00:01.051) 0:04:50.538 **********
2026-03-30 00:53:06.636000 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636003 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636007 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636011 | orchestrator |
2026-03-30 00:53:06.636015 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-03-30 00:53:06.636018 | orchestrator | Monday 30 March 2026 00:51:55 +0000 (0:00:00.427) 0:04:50.965 **********
2026-03-30 00:53:06.636022 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636026 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636029 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636033 | orchestrator |
2026-03-30 00:53:06.636037 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-03-30 00:53:06.636040 | orchestrator | Monday 30 March 2026 00:51:56 +0000 (0:00:01.118) 0:04:52.084 **********
2026-03-30 00:53:06.636044 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.636048 | orchestrator |
2026-03-30 00:53:06.636052 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-03-30 00:53:06.636055 | orchestrator | Monday 30 March 2026 00:51:58 +0000 (0:00:01.495) 0:04:53.579 **********
2026-03-30 00:53:06.636062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-30 00:53:06.636067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-30 00:53:06.636073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-30 00:53:06.636107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-30 00:53:06.636114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-30 00:53:06.636118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-30 00:53:06.636134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-30 00:53:06.636172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-30 00:53:06.636183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-30 00:53:06.636188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-30 00:53:06.636194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-30 00:53:06.636211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-30 00:53:06.636240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636256 | orchestrator |
2026-03-30 00:53:06.636260 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-30 00:53:06.636264 | orchestrator | Monday 30 March 2026 00:52:02 +0000 (0:00:04.071) 0:04:57.650 **********
2026-03-30 00:53:06.636270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-30 00:53:06.636274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-30 00:53:06.636278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-30 00:53:06.636327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-30 00:53:06.636337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-30 00:53:06.636344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-30 00:53:06.636353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636387 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-30 00:53:06.636415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-30 00:53:06.636426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636446 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-30 00:53:06.636468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-30 00:53:06.636480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-30 00:53:06.636534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-30 00:53:06.636540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-30 00:53:06.636555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-30 00:53:06.636570 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636576 | orchestrator |
2026-03-30 00:53:06.636582 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-30 00:53:06.636588 | orchestrator | Monday 30 March 2026 00:52:03 +0000 (0:00:00.945) 0:04:58.596 **********
2026-03-30 00:53:06.636595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-30 00:53:06.636603 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-30 00:53:06.636611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-30 00:53:06.636618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-30 00:53:06.636625 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.636631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-30 00:53:06.636638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-30 00:53:06.636644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-30 00:53:06.636651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}})  2026-03-30 00:53:06.636658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-30 00:53:06.636668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-30 00:53:06.636672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-30 00:53:06.636677 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.636681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-30 00:53:06.636685 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.636693 | orchestrator | 2026-03-30 00:53:06.636697 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-30 00:53:06.636702 | orchestrator | Monday 30 March 2026 00:52:04 +0000 (0:00:01.288) 0:04:59.884 ********** 2026-03-30 00:53:06.636708 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.636714 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.636720 | orchestrator | skipping: [testbed-node-2] 
2026-03-30 00:53:06.636726 | orchestrator |
2026-03-30 00:53:06.636732 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-30 00:53:06.636742 | orchestrator | Monday 30 March 2026 00:52:05 +0000 (0:00:00.444) 0:05:00.329 **********
2026-03-30 00:53:06.636748 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636754 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636760 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636766 | orchestrator |
2026-03-30 00:53:06.636773 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-30 00:53:06.636778 | orchestrator | Monday 30 March 2026 00:52:06 +0000 (0:00:01.120) 0:05:01.449 **********
2026-03-30 00:53:06.636830 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.636835 | orchestrator |
2026-03-30 00:53:06.636839 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-30 00:53:06.636843 | orchestrator | Monday 30 March 2026 00:52:07 +0000 (0:00:01.304) 0:05:02.753 **********
2026-03-30 00:53:06.636848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:53:06.636853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:53:06.636861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:53:06.636870 | orchestrator |
2026-03-30 00:53:06.636875 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-30 00:53:06.636878 | orchestrator | Monday 30 March 2026 00:52:09 +0000 (0:00:02.219) 0:05:04.972 **********
2026-03-30 00:53:06.636885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:53:06.636889 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:53:06.636898 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-30 00:53:06.636906 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636909 | orchestrator |
2026-03-30 00:53:06.636913 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-30 00:53:06.636917 | orchestrator | Monday 30 March 2026 00:52:10 +0000 (0:00:00.366) 0:05:05.339 **********
2026-03-30 00:53:06.636928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-30 00:53:06.636933 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-30 00:53:06.636941 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-30 00:53:06.636948 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636952 | orchestrator |
2026-03-30 00:53:06.636956 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-30 00:53:06.636960 | orchestrator | Monday 30 March 2026 00:52:10 +0000 (0:00:00.563) 0:05:05.903 **********
2026-03-30 00:53:06.636963 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636967 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636971 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636975 | orchestrator |
2026-03-30 00:53:06.636978 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-30 00:53:06.636982 | orchestrator | Monday 30 March 2026 00:52:11 +0000 (0:00:00.800) 0:05:06.703 **********
2026-03-30 00:53:06.636986 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.636990 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.636993 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.636997 | orchestrator |
2026-03-30 00:53:06.637001 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-30 00:53:06.637010 | orchestrator | Monday 30 March 2026 00:52:12 +0000 (0:00:01.467) 0:05:08.171 **********
2026-03-30 00:53:06.637014 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:53:06.637018 | orchestrator |
2026-03-30 00:53:06.637022 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-03-30 00:53:06.637025 | orchestrator | Monday 30 March 2026 00:52:14 +0000 (0:00:01.557) 0:05:09.728 **********
2026-03-30 00:53:06.637030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637065 | orchestrator |
2026-03-30 00:53:06.637069 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-03-30 00:53:06.637076 | orchestrator | Monday 30 March 2026 00:52:20 +0000 (0:00:05.608) 0:05:15.336 **********
2026-03-30 00:53:06.637082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637091 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637105 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-30 00:53:06.637124 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637128 | orchestrator |
2026-03-30 00:53:06.637132 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-03-30 00:53:06.637136 | orchestrator | Monday 30 March 2026 00:52:20 +0000 (0:00:00.900) 0:05:16.237 **********
2026-03-30 00:53:06.637140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637159 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637182 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-30 00:53:06.637201 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637205 | orchestrator |
2026-03-30 00:53:06.637208 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-30 00:53:06.637212 | orchestrator | Monday 30 March 2026 00:52:21 +0000 (0:00:00.849) 0:05:17.086 **********
2026-03-30 00:53:06.637216 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:53:06.637220 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:53:06.637224 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:53:06.637227 | orchestrator |
2026-03-30 00:53:06.637231 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-30 00:53:06.637235 | orchestrator | Monday 30 March 2026 00:52:23 +0000 (0:00:01.369) 0:05:18.455 **********
2026-03-30 00:53:06.637241 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:53:06.637245 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:53:06.637249 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:53:06.637252 | orchestrator |
2026-03-30 00:53:06.637256 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-30 00:53:06.637260 | orchestrator | Monday 30 March 2026 00:52:25 +0000 (0:00:02.032) 0:05:20.488 **********
2026-03-30 00:53:06.637264 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637268 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637271 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637275 | orchestrator |
2026-03-30 00:53:06.637279 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-30 00:53:06.637283 | orchestrator | Monday 30 March 2026 00:52:25 +0000 (0:00:00.458) 0:05:20.947 **********
2026-03-30 00:53:06.637287 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637291 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637294 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637298 | orchestrator |
2026-03-30 00:53:06.637302 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-30 00:53:06.637306 | orchestrator | Monday 30 March 2026 00:52:25 +0000 (0:00:00.277) 0:05:21.225 **********
2026-03-30 00:53:06.637310 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637313 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637317 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637321 | orchestrator |
2026-03-30 00:53:06.637325 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-30 00:53:06.637328 | orchestrator | Monday 30 March 2026 00:52:26 +0000 (0:00:00.289) 0:05:21.514 **********
2026-03-30 00:53:06.637332 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637336 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637340 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637347 | orchestrator |
2026-03-30 00:53:06.637350 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-30 00:53:06.637357 | orchestrator | Monday 30 March 2026 00:52:26 +0000 (0:00:00.254) 0:05:21.768 **********
2026-03-30 00:53:06.637361 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:53:06.637365 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:53:06.637369 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:53:06.637372 | orchestrator |
2026-03-30 00:53:06.637376 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-30 00:53:06.637380 | orchestrator | Monday 30 March 2026 00:52:26 +0000 (0:00:00.469)
0:05:22.237 ********** 2026-03-30 00:53:06.637384 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637388 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637391 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637395 | orchestrator | 2026-03-30 00:53:06.637399 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-30 00:53:06.637403 | orchestrator | Monday 30 March 2026 00:52:27 +0000 (0:00:00.481) 0:05:22.719 ********** 2026-03-30 00:53:06.637406 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637411 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637414 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637418 | orchestrator | 2026-03-30 00:53:06.637422 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-30 00:53:06.637426 | orchestrator | Monday 30 March 2026 00:52:28 +0000 (0:00:00.709) 0:05:23.429 ********** 2026-03-30 00:53:06.637430 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637433 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637437 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637441 | orchestrator | 2026-03-30 00:53:06.637445 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-30 00:53:06.637449 | orchestrator | Monday 30 March 2026 00:52:28 +0000 (0:00:00.501) 0:05:23.930 ********** 2026-03-30 00:53:06.637453 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637456 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637460 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637464 | orchestrator | 2026-03-30 00:53:06.637468 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-30 00:53:06.637471 | orchestrator | Monday 30 March 2026 00:52:29 +0000 (0:00:01.027) 0:05:24.957 ********** 2026-03-30 00:53:06.637475 | 
orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637479 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637483 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637486 | orchestrator | 2026-03-30 00:53:06.637490 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-30 00:53:06.637494 | orchestrator | Monday 30 March 2026 00:52:30 +0000 (0:00:01.047) 0:05:26.005 ********** 2026-03-30 00:53:06.637498 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637502 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637505 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637509 | orchestrator | 2026-03-30 00:53:06.637513 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-30 00:53:06.637517 | orchestrator | Monday 30 March 2026 00:52:31 +0000 (0:00:00.838) 0:05:26.843 ********** 2026-03-30 00:53:06.637520 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.637524 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.637528 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.637532 | orchestrator | 2026-03-30 00:53:06.637536 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-30 00:53:06.637539 | orchestrator | Monday 30 March 2026 00:52:36 +0000 (0:00:04.561) 0:05:31.405 ********** 2026-03-30 00:53:06.637543 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637548 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637554 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637560 | orchestrator | 2026-03-30 00:53:06.637569 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-30 00:53:06.637585 | orchestrator | Monday 30 March 2026 00:52:39 +0000 (0:00:03.035) 0:05:34.440 ********** 2026-03-30 00:53:06.637590 | orchestrator | changed: [testbed-node-0] 2026-03-30 
00:53:06.637596 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.637601 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.637606 | orchestrator | 2026-03-30 00:53:06.637612 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-30 00:53:06.637618 | orchestrator | Monday 30 March 2026 00:52:52 +0000 (0:00:13.193) 0:05:47.634 ********** 2026-03-30 00:53:06.637624 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637633 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637639 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637645 | orchestrator | 2026-03-30 00:53:06.637650 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-30 00:53:06.637656 | orchestrator | Monday 30 March 2026 00:52:53 +0000 (0:00:00.708) 0:05:48.342 ********** 2026-03-30 00:53:06.637661 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:53:06.637667 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:53:06.637672 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:53:06.637679 | orchestrator | 2026-03-30 00:53:06.637684 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-30 00:53:06.637690 | orchestrator | Monday 30 March 2026 00:52:57 +0000 (0:00:04.239) 0:05:52.582 ********** 2026-03-30 00:53:06.637696 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637703 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637709 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637715 | orchestrator | 2026-03-30 00:53:06.637721 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-30 00:53:06.637727 | orchestrator | Monday 30 March 2026 00:52:57 +0000 (0:00:00.509) 0:05:53.091 ********** 2026-03-30 00:53:06.637733 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637742 | 
orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637749 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637755 | orchestrator | 2026-03-30 00:53:06.637762 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-30 00:53:06.637767 | orchestrator | Monday 30 March 2026 00:52:58 +0000 (0:00:00.302) 0:05:53.393 ********** 2026-03-30 00:53:06.637773 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637780 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637803 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637810 | orchestrator | 2026-03-30 00:53:06.637816 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-30 00:53:06.637826 | orchestrator | Monday 30 March 2026 00:52:58 +0000 (0:00:00.289) 0:05:53.683 ********** 2026-03-30 00:53:06.637832 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637838 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637844 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637851 | orchestrator | 2026-03-30 00:53:06.637857 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-30 00:53:06.637863 | orchestrator | Monday 30 March 2026 00:52:58 +0000 (0:00:00.311) 0:05:53.995 ********** 2026-03-30 00:53:06.637869 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637876 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637882 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637887 | orchestrator | 2026-03-30 00:53:06.637893 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-30 00:53:06.637900 | orchestrator | Monday 30 March 2026 00:52:59 +0000 (0:00:00.563) 0:05:54.558 ********** 2026-03-30 00:53:06.637909 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:53:06.637914 | 
orchestrator | skipping: [testbed-node-1] 2026-03-30 00:53:06.637920 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:53:06.637926 | orchestrator | 2026-03-30 00:53:06.637931 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-30 00:53:06.637936 | orchestrator | Monday 30 March 2026 00:52:59 +0000 (0:00:00.315) 0:05:54.873 ********** 2026-03-30 00:53:06.637949 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.637956 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637962 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637968 | orchestrator | 2026-03-30 00:53:06.637974 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-30 00:53:06.637980 | orchestrator | Monday 30 March 2026 00:53:04 +0000 (0:00:04.801) 0:05:59.674 ********** 2026-03-30 00:53:06.637986 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:53:06.637993 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:53:06.637999 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:53:06.638004 | orchestrator | 2026-03-30 00:53:06.638011 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:53:06.638078 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-30 00:53:06.638084 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-30 00:53:06.638087 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-30 00:53:06.638091 | orchestrator | 2026-03-30 00:53:06.638095 | orchestrator | 2026-03-30 00:53:06.638099 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:53:06.638102 | orchestrator | Monday 30 March 2026 00:53:05 +0000 (0:00:00.782) 0:06:00.457 ********** 2026-03-30 
00:53:06.638106 | orchestrator | =============================================================================== 2026-03-30 00:53:06.638110 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.19s 2026-03-30 00:53:06.638114 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.44s 2026-03-30 00:53:06.638117 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.32s 2026-03-30 00:53:06.638121 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.89s 2026-03-30 00:53:06.638125 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.61s 2026-03-30 00:53:06.638129 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.05s 2026-03-30 00:53:06.638132 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.91s 2026-03-30 00:53:06.638136 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.80s 2026-03-30 00:53:06.638140 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.56s 2026-03-30 00:53:06.638149 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.35s 2026-03-30 00:53:06.638153 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.24s 2026-03-30 00:53:06.638157 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.16s 2026-03-30 00:53:06.638160 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.07s 2026-03-30 00:53:06.638164 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.96s 2026-03-30 00:53:06.638168 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.92s 2026-03-30 00:53:06.638172 
| orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.69s 2026-03-30 00:53:06.638175 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.59s 2026-03-30 00:53:06.638179 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.58s 2026-03-30 00:53:06.638183 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.40s 2026-03-30 00:53:06.638186 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.33s 2026-03-30 00:53:06.638190 | orchestrator | 2026-03-30 00:53:06 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:53:06.638199 | orchestrator | 2026-03-30 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:53:09.659625 | orchestrator | 2026-03-30 00:53:09 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:53:09.662128 | orchestrator | 2026-03-30 00:53:09 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:53:09.663809 | orchestrator | 2026-03-30 00:53:09 | INFO  | Task 5f4192a1-a6d7-4f9d-9097-399600dbaf88 is in state STARTED 2026-03-30 00:53:09.664294 | orchestrator | 2026-03-30 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:53:12.698692 | orchestrator | 2026-03-30 00:53:12 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:53:12.701792 | orchestrator | 2026-03-30 00:53:12 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:53:12.707314 | orchestrator | 2026-03-30 00:53:12 | INFO  | Task 5f4192a1-a6d7-4f9d-9097-399600dbaf88 is in state STARTED 2026-03-30 00:53:12.707385 | orchestrator | 2026-03-30 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:53:15.762408 | orchestrator | 2026-03-30 00:53:15 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f 
is in state STARTED 2026-03-30 00:53:15.762491 | orchestrator | 2026-03-30 00:53:15 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:53:15.763955 | orchestrator | 2026-03-30 00:53:15 | INFO  | Task 5f4192a1-a6d7-4f9d-9097-399600dbaf88 is in state STARTED 2026-03-30 00:53:15.763989 | orchestrator | 2026-03-30 00:53:15 | INFO  | Wait 1 second(s) until the next
check 2026-03-30 00:54:38.047103 | orchestrator | 2026-03-30 00:54:38 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state STARTED 2026-03-30 00:54:38.048976 | orchestrator | 2026-03-30 00:54:38 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:54:38.053094 | orchestrator | 2026-03-30 00:54:38 | INFO  | Task 5f4192a1-a6d7-4f9d-9097-399600dbaf88 is in state STARTED 2026-03-30 00:54:38.053277 | orchestrator | 2026-03-30 00:54:38 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:54:41.117916 | orchestrator | 2026-03-30 00:54:41 | INFO  | Task e60d9f5e-5191-4ae4-88ea-8f19bd96a48f is in state SUCCESS 2026-03-30 00:54:41.119161 | orchestrator | 2026-03-30 00:54:41.119193 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-30 00:54:41.119199 | orchestrator | 2.16.14 2026-03-30 00:54:41.119204 | orchestrator | 2026-03-30 00:54:41.119209 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-30 00:54:41.119215 | orchestrator | 2026-03-30 00:54:41.119219 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-30 00:54:41.119225 | orchestrator | Monday 30 March 2026 00:44:35 +0000 (0:00:00.741) 0:00:00.741 ********** 2026-03-30 00:54:41.119230 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.119236 | orchestrator | 2026-03-30 00:54:41.119241 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-30 00:54:41.119245 | orchestrator | Monday 30 March 2026 00:44:36 +0000 (0:00:01.306) 0:00:02.047 ********** 2026-03-30 00:54:41.119250 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.119255 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.119260 | orchestrator | ok: 
[testbed-node-1] 2026-03-30 00:54:41.119264 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.119269 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.119274 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.119278 | orchestrator | 2026-03-30 00:54:41.119283 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-30 00:54:41.119289 | orchestrator | Monday 30 March 2026 00:44:38 +0000 (0:00:01.780) 0:00:03.828 ********** 2026-03-30 00:54:41.119297 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.119305 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.119313 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.119319 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.119323 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.119328 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.119332 | orchestrator | 2026-03-30 00:54:41.119337 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-30 00:54:41.119342 | orchestrator | Monday 30 March 2026 00:44:39 +0000 (0:00:00.628) 0:00:04.456 ********** 2026-03-30 00:54:41.119346 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.119351 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.119355 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.119360 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.119364 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.119369 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.119373 | orchestrator | 2026-03-30 00:54:41.119378 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-30 00:54:41.119382 | orchestrator | Monday 30 March 2026 00:44:40 +0000 (0:00:01.104) 0:00:05.561 ********** 2026-03-30 00:54:41.119387 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.119395 | orchestrator | ok: [testbed-node-4] 2026-03-30 
00:54:41.119405 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.119432 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.119440 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.119448 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.119455 | orchestrator | 2026-03-30 00:54:41.119462 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-30 00:54:41.119469 | orchestrator | Monday 30 March 2026 00:44:41 +0000 (0:00:01.199) 0:00:06.760 ********** 2026-03-30 00:54:41.119560 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.119568 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.119575 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.119598 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.119606 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.119613 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.119620 | orchestrator | 2026-03-30 00:54:41.119664 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-30 00:54:41.119676 | orchestrator | Monday 30 March 2026 00:44:42 +0000 (0:00:00.700) 0:00:07.461 ********** 2026-03-30 00:54:41.119684 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.120160 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.120259 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.120267 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.120275 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.120282 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.120290 | orchestrator | 2026-03-30 00:54:41.120298 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-30 00:54:41.120306 | orchestrator | Monday 30 March 2026 00:44:43 +0000 (0:00:01.182) 0:00:08.644 ********** 2026-03-30 00:54:41.120314 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.120331 | 
orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.120339 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.120347 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.120355 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.120390 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.120395 | orchestrator | 2026-03-30 00:54:41.120400 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-30 00:54:41.120975 | orchestrator | Monday 30 March 2026 00:44:43 +0000 (0:00:00.637) 0:00:09.281 ********** 2026-03-30 00:54:41.120992 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.120997 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.121002 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.121006 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.121010 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.121015 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.121019 | orchestrator | 2026-03-30 00:54:41.121024 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-30 00:54:41.121029 | orchestrator | Monday 30 March 2026 00:44:44 +0000 (0:00:00.733) 0:00:10.015 ********** 2026-03-30 00:54:41.121033 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:54:41.121038 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:54:41.121043 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:54:41.121047 | orchestrator | 2026-03-30 00:54:41.121052 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-30 00:54:41.121056 | orchestrator | Monday 30 March 2026 00:44:45 +0000 (0:00:00.625) 0:00:10.640 ********** 2026-03-30 00:54:41.121061 | orchestrator | ok: 
[testbed-node-3] 2026-03-30 00:54:41.121065 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.121070 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.121094 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.121099 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.121104 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.121108 | orchestrator | 2026-03-30 00:54:41.121113 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-30 00:54:41.121118 | orchestrator | Monday 30 March 2026 00:44:46 +0000 (0:00:00.730) 0:00:11.370 ********** 2026-03-30 00:54:41.121428 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:54:41.121520 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:54:41.121529 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:54:41.121536 | orchestrator | 2026-03-30 00:54:41.121544 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-30 00:54:41.121552 | orchestrator | Monday 30 March 2026 00:44:48 +0000 (0:00:02.132) 0:00:13.503 ********** 2026-03-30 00:54:41.121560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-30 00:54:41.121566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-30 00:54:41.121570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-30 00:54:41.121575 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.121613 | orchestrator | 2026-03-30 00:54:41.121618 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-30 00:54:41.121623 | orchestrator | Monday 30 March 2026 00:44:48 +0000 (0:00:00.678) 0:00:14.181 ********** 2026-03-30 00:54:41.121629 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121635 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121645 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.121649 | orchestrator | 2026-03-30 00:54:41.121654 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-30 00:54:41.121659 | orchestrator | Monday 30 March 2026 00:44:50 +0000 (0:00:01.206) 0:00:15.388 ********** 2026-03-30 00:54:41.121664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121671 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121685 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.121690 | orchestrator | 2026-03-30 00:54:41.121694 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-30 00:54:41.121699 | orchestrator | Monday 30 March 2026 00:44:50 +0000 (0:00:00.155) 0:00:15.544 ********** 2026-03-30 00:54:41.121963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-30 00:44:46.583307', 'end': '2026-03-30 00:44:46.693147', 'delta': '0:00:00.109840', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-30 00:44:47.159736', 'end': '2026-03-30 00:44:47.251147', 'delta': '0:00:00.091411', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-30 00:44:47.871702', 'end': '2026-03-30 00:44:47.975446', 'delta': '0:00:00.103744', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.121986 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.121990 | orchestrator | 2026-03-30 00:54:41.121995 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-30 00:54:41.122000 | orchestrator | Monday 30 March 2026 00:44:50 +0000 (0:00:00.610) 0:00:16.154 ********** 2026-03-30 00:54:41.122005 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.122009 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.122054 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.122065 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.122073 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.122081 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.122089 | orchestrator | 2026-03-30 
00:54:41.122097 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-30 00:54:41.122106 | orchestrator | Monday 30 March 2026 00:44:52 +0000 (0:00:01.998) 0:00:18.153 ********** 2026-03-30 00:54:41.122126 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.122131 | orchestrator | 2026-03-30 00:54:41.122135 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-30 00:54:41.122140 | orchestrator | Monday 30 March 2026 00:44:53 +0000 (0:00:00.993) 0:00:19.146 ********** 2026-03-30 00:54:41.122145 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122149 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122154 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122158 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122163 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122168 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122172 | orchestrator | 2026-03-30 00:54:41.122177 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-30 00:54:41.122188 | orchestrator | Monday 30 March 2026 00:44:55 +0000 (0:00:01.717) 0:00:20.864 ********** 2026-03-30 00:54:41.122192 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122197 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122205 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122209 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122214 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122219 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122223 | orchestrator | 2026-03-30 00:54:41.122228 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-30 00:54:41.122232 | orchestrator | Monday 30 March 2026 00:44:56 
+0000 (0:00:01.439) 0:00:22.304 ********** 2026-03-30 00:54:41.122237 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122241 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122246 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122251 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122255 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122260 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122264 | orchestrator | 2026-03-30 00:54:41.122269 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-30 00:54:41.122273 | orchestrator | Monday 30 March 2026 00:44:58 +0000 (0:00:01.341) 0:00:23.646 ********** 2026-03-30 00:54:41.122278 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122283 | orchestrator | 2026-03-30 00:54:41.122287 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-30 00:54:41.122292 | orchestrator | Monday 30 March 2026 00:44:58 +0000 (0:00:00.108) 0:00:23.754 ********** 2026-03-30 00:54:41.122296 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122301 | orchestrator | 2026-03-30 00:54:41.122306 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-30 00:54:41.122320 | orchestrator | Monday 30 March 2026 00:44:58 +0000 (0:00:00.229) 0:00:23.983 ********** 2026-03-30 00:54:41.122325 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122335 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122340 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122386 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122394 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122406 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122419 | orchestrator | 2026-03-30 00:54:41.122428 | orchestrator | TASK [ceph-facts 
: Resolve device link(s)] ************************************* 2026-03-30 00:54:41.122436 | orchestrator | Monday 30 March 2026 00:44:59 +0000 (0:00:00.565) 0:00:24.548 ********** 2026-03-30 00:54:41.122444 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122449 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122454 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122458 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122463 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122467 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122472 | orchestrator | 2026-03-30 00:54:41.122476 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-30 00:54:41.122481 | orchestrator | Monday 30 March 2026 00:45:00 +0000 (0:00:01.263) 0:00:25.812 ********** 2026-03-30 00:54:41.122485 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122490 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122494 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122499 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122503 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122508 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122512 | orchestrator | 2026-03-30 00:54:41.122517 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-30 00:54:41.122521 | orchestrator | Monday 30 March 2026 00:45:01 +0000 (0:00:00.891) 0:00:26.703 ********** 2026-03-30 00:54:41.122526 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122535 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122540 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122545 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122549 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122554 | 
orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122558 | orchestrator | 2026-03-30 00:54:41.122563 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-30 00:54:41.122567 | orchestrator | Monday 30 March 2026 00:45:02 +0000 (0:00:00.877) 0:00:27.581 ********** 2026-03-30 00:54:41.122572 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122576 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122595 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122599 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122604 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122608 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122613 | orchestrator | 2026-03-30 00:54:41.122617 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-30 00:54:41.122622 | orchestrator | Monday 30 March 2026 00:45:02 +0000 (0:00:00.731) 0:00:28.313 ********** 2026-03-30 00:54:41.122627 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122631 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122636 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122640 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122645 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122649 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122654 | orchestrator | 2026-03-30 00:54:41.122658 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-30 00:54:41.122663 | orchestrator | Monday 30 March 2026 00:45:03 +0000 (0:00:01.022) 0:00:29.335 ********** 2026-03-30 00:54:41.122667 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.122672 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.122676 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.122681 | 
orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.122685 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.122690 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.122694 | orchestrator | 2026-03-30 00:54:41.122699 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-30 00:54:41.122704 | orchestrator | Monday 30 March 2026 00:45:04 +0000 (0:00:00.945) 0:00:30.281 ********** 2026-03-30 00:54:41.122715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17', 'dm-uuid-LVM-VhndrP4JRm6lMg7AksZ6FMYg6vrongntBhq8Y3ZdFP38yXbqOmpgRG5EKvABQxIM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237', 'dm-uuid-LVM-clt1Fc1mc6DYo8CIRrVyGxkMSuH2Bqi8CEXbm2O1oeU38EcT3HRspLHVcLRRRQHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:54:41.122895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:54:41.122964 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-T52kc9-Ldma-uyoF-foMM-TKOt-Q6hL-lWcd0W', 'scsi-0QEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543', 'scsi-SQEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:54:41.122971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3NrJ0N-c5A3-kVeB-M2yt-WKbg-feCR-MQO7Hq', 'scsi-0QEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf', 'scsi-SQEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.122979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2', 'dm-uuid-LVM-7NSr7HCCIWNL8JT5s5DWeooLgm1tLA0wkqWH4bB8nx79cAMWo0Aep8fPbkrkd7aU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.122984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f', 'scsi-SQEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3', 'dm-uuid-LVM-6Z6bNPd3WmujtOY3ALBxsjXmhQv6S7FTLwE049uSuyJ2dYpnlJy7LWfD7MstxUzI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dwZ7EB-kVjX-n0aN-8G5X-2Diw-sf1q-CJJtQ3', 'scsi-0QEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a', 'scsi-SQEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123170 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.123178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cj2p5T-0MVX-qd8p-rpkg-503F-bgsJ-8eRiJ0', 'scsi-0QEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec', 'scsi-SQEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a', 'scsi-SQEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f', 'dm-uuid-LVM-2uuZCCeT9vVzXmKcCJCigXK8qGQm5Z9cANJbXZ2Z566G6B1fCFamf4KR5cElvaU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4', 'dm-uuid-LVM-kNEger4NY8CmZGRArGu8wpScmnkCU4EBN6oEYN0TVN8CaN3dJgrQY1Cm14otlkFv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123387 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.123395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b3ctSr-4ZBD-cg4d-Gfjf-hW1b-OTXp-B4dAW8', 'scsi-0QEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c', 'scsi-SQEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3HGldB-UGB3-nQU2-IT0R-Q7hS-Dpi6-YOBzBS', 'scsi-0QEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74', 'scsi-SQEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57', 'scsi-SQEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123645 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.123653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part1', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part14', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part15', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part16', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-30 00:54:41.123883 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.123890 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.123898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.123999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.124007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-30 00:54:41.124018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:54:41.124071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:54:41.124081 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.124089 | orchestrator | 2026-03-30 00:54:41.124096 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-30 00:54:41.124104 | orchestrator | Monday 30 March 2026 00:45:06 +0000 (0:00:01.391) 0:00:31.672 ********** 2026-03-30 00:54:41.124112 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17', 'dm-uuid-LVM-VhndrP4JRm6lMg7AksZ6FMYg6vrongntBhq8Y3ZdFP38yXbqOmpgRG5EKvABQxIM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237', 'dm-uuid-LVM-clt1Fc1mc6DYo8CIRrVyGxkMSuH2Bqi8CEXbm2O1oeU38EcT3HRspLHVcLRRRQHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124134 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124271 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2', 'dm-uuid-LVM-7NSr7HCCIWNL8JT5s5DWeooLgm1tLA0wkqWH4bB8nx79cAMWo0Aep8fPbkrkd7aU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124345 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3', 
'dm-uuid-LVM-6Z6bNPd3WmujtOY3ALBxsjXmhQv6S7FTLwE049uSuyJ2dYpnlJy7LWfD7MstxUzI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-T52kc9-Ldma-uyoF-foMM-TKOt-Q6hL-lWcd0W', 'scsi-0QEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543', 'scsi-SQEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3NrJ0N-c5A3-kVeB-M2yt-WKbg-feCR-MQO7Hq', 'scsi-0QEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf', 'scsi-SQEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f', 'scsi-SQEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124545 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f', 'dm-uuid-LVM-2uuZCCeT9vVzXmKcCJCigXK8qGQm5Z9cANJbXZ2Z566G6B1fCFamf4KR5cElvaU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4', 'dm-uuid-LVM-kNEger4NY8CmZGRArGu8wpScmnkCU4EBN6oEYN0TVN8CaN3dJgrQY1Cm14otlkFv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124598 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124660 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124679 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124698 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.124759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-30 00:54:41.124806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dwZ7EB-kVjX-n0aN-8G5X-2Diw-sf1q-CJJtQ3', 'scsi-0QEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a', 'scsi-SQEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cj2p5T-0MVX-qd8p-rpkg-503F-bgsJ-8eRiJ0', 'scsi-0QEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec', 'scsi-SQEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-30 00:54:41.124949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b3ctSr-4ZBD-cg4d-Gfjf-hW1b-OTXp-B4dAW8', 'scsi-0QEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c', 'scsi-SQEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a', 'scsi-SQEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.124971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3HGldB-UGB3-nQU2-IT0R-Q7hS-Dpi6-YOBzBS', 'scsi-0QEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74', 'scsi-SQEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125004 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-34-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125016 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57', 'scsi-SQEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125064 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125073 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125086 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125094 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125102 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125113 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125121 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.125129 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125175 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125197 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125258 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e75c076-35fa-416f-b046-253c4346d0dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125269 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125283 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125291 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125299 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125309 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125317 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125365 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125379 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125392 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part1', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part14', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part15', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part16', 'scsi-SQEMU_QEMU_HARDDISK_000e91a4-99ec-4ebf-a015-8c98def25257-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-30 00:54:41.125409 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125461 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.125470 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.125477 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.125485 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:54:41.125493 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125509 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125517 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125529 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125537 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125618 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125630 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125643 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_0349b975-80be-4625-9fdc-e308e57655f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125652 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:54:41.125664 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.125672 | orchestrator |
2026-03-30 00:54:41.125734 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-30 00:54:41.125744 | orchestrator | Monday 30 March 2026 00:45:07 +0000 (0:00:01.084) 0:00:32.757 **********
2026-03-30 00:54:41.125752 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.125760 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.125768 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.125776 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.125783 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.125790 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.125798 | orchestrator |
2026-03-30 00:54:41.125806 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-30 00:54:41.125813 | orchestrator | Monday 30 March 2026 00:45:08 +0000 (0:00:01.263) 0:00:34.021 **********
2026-03-30 00:54:41.125821 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.125828 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.125836 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.125843 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.125850 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.125858 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.125865 | orchestrator |
2026-03-30 00:54:41.125873 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-30 00:54:41.125880 | orchestrator | Monday 30 March 2026 00:45:09 +0000 (0:00:01.080) 0:00:35.102 **********
2026-03-30 00:54:41.125888 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.125895 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.125903 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.125910 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.125918 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.125926 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.125931 | orchestrator |
2026-03-30 00:54:41.125936 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-30 00:54:41.125941 | orchestrator | Monday 30 March 2026 00:45:11 +0000 (0:00:01.441) 0:00:36.543 **********
2026-03-30 00:54:41.125945 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.125950 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.125954 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.125959 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.125963 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.125968 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.125972 | orchestrator |
2026-03-30 00:54:41.125977 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-30 00:54:41.125981 | orchestrator | Monday 30 March 2026 00:45:11 +0000 (0:00:00.640) 0:00:37.184 **********
2026-03-30 00:54:41.125986 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.125991 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.125995 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126000 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126011 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126040 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126044 | orchestrator |
2026-03-30 00:54:41.126049 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-30 00:54:41.126054 | orchestrator | Monday 30 March 2026 00:45:12 +0000 (0:00:01.005) 0:00:38.189 **********
2026-03-30 00:54:41.126058 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126063 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.126072 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126077 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126081 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126086 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126090 | orchestrator |
2026-03-30 00:54:41.126095 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-30 00:54:41.126099 | orchestrator | Monday 30 March 2026 00:45:13 +0000 (0:00:00.757) 0:00:38.946 **********
2026-03-30 00:54:41.126104 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-30 00:54:41.126109 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-30 00:54:41.126113 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-30 00:54:41.126118 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-30 00:54:41.126122 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-30 00:54:41.126127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-30 00:54:41.126131 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-30 00:54:41.126136 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-30 00:54:41.126143 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-30 00:54:41.126148 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-30 00:54:41.126153 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-30 00:54:41.126157 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-30 00:54:41.126161 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-30 00:54:41.126166 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-30 00:54:41.126170 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-30 00:54:41.126175 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-30 00:54:41.126179 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-30 00:54:41.126184 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-30 00:54:41.126188 | orchestrator |
2026-03-30 00:54:41.126193 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-30 00:54:41.126197 | orchestrator | Monday 30 March 2026 00:45:17 +0000 (0:00:04.262) 0:00:43.209 **********
2026-03-30 00:54:41.126202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-30 00:54:41.126207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-30 00:54:41.126211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-30 00:54:41.126215 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126220 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-30 00:54:41.126224 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-30 00:54:41.126229 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-30 00:54:41.126233 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.126238 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-30 00:54:41.126259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-30 00:54:41.126265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-30 00:54:41.126269 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-30 00:54:41.126278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-30 00:54:41.126283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-30 00:54:41.126287 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126292 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-30 00:54:41.126296 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-30 00:54:41.126301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-30 00:54:41.126305 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126315 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-30 00:54:41.126320 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-30 00:54:41.126324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-30 00:54:41.126328 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126333 | orchestrator |
2026-03-30 00:54:41.126337 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-30 00:54:41.126342 | orchestrator | Monday 30 March 2026 00:45:18 +0000 (0:00:01.015) 0:00:44.224 **********
2026-03-30 00:54:41.126347 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126351 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126356 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126362 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:54:41.126367 | orchestrator |
2026-03-30 00:54:41.126372 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-30 00:54:41.126378 | orchestrator | Monday 30 March 2026 00:45:20 +0000 (0:00:01.580) 0:00:45.804 **********
2026-03-30 00:54:41.126383 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126388 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.126393 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126398 | orchestrator |
2026-03-30 00:54:41.126403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-30 00:54:41.126409 | orchestrator | Monday 30 March 2026 00:45:20 +0000 (0:00:00.377) 0:00:46.182 **********
2026-03-30 00:54:41.126414 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126420 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.126425 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126430 | orchestrator |
2026-03-30 00:54:41.126435 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-30 00:54:41.126440 | orchestrator | Monday 30 March 2026 00:45:21 +0000 (0:00:00.296) 0:00:46.479 **********
2026-03-30 00:54:41.126445 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126450 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.126455 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126460 | orchestrator |
2026-03-30 00:54:41.126466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-30 00:54:41.126471 | orchestrator | Monday 30 March 2026 00:45:21 +0000 (0:00:00.371) 0:00:46.850 **********
2026-03-30 00:54:41.126476 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.126481 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.126486 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.126492 | orchestrator |
2026-03-30 00:54:41.126497 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-30 00:54:41.126502 | orchestrator | Monday 30 March 2026 00:45:22 +0000 (0:00:01.237) 0:00:48.088 **********
2026-03-30 00:54:41.126506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.126511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.126515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.126520 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126524 | orchestrator |
2026-03-30 00:54:41.126531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-30 00:54:41.126536 | orchestrator | Monday 30 March 2026 00:45:23 +0000 (0:00:00.535) 0:00:48.623 **********
2026-03-30 00:54:41.126540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.126545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.126549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.126554 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126558 | orchestrator |
2026-03-30 00:54:41.126563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-30 00:54:41.126570 | orchestrator | Monday 30 March 2026 00:45:23 +0000 (0:00:00.454) 0:00:49.077 **********
2026-03-30 00:54:41.126575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.126594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.126602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.126609 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126617 | orchestrator |
2026-03-30 00:54:41.126625 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-30 00:54:41.126633 | orchestrator | Monday 30 March 2026 00:45:24 +0000 (0:00:00.326) 0:00:49.404 **********
2026-03-30 00:54:41.126638 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.126643 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.126647 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.126652 | orchestrator |
2026-03-30 00:54:41.126656 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-30 00:54:41.126661 | orchestrator | Monday 30 March 2026 00:45:24 +0000 (0:00:00.439) 0:00:49.843 **********
2026-03-30 00:54:41.126665 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-30 00:54:41.126670 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-30 00:54:41.126689 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-30 00:54:41.126694 | orchestrator |
2026-03-30 00:54:41.126699 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-30 00:54:41.126703 | orchestrator | Monday 30 March 2026 00:45:25 +0000 (0:00:01.136) 0:00:50.979 **********
2026-03-30 00:54:41.126708 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-30 00:54:41.126713 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-30 00:54:41.126717 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-30 00:54:41.126722 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.126726 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-30 00:54:41.126731 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-30 00:54:41.126735 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-30 00:54:41.126740 | orchestrator |
2026-03-30 00:54:41.126745 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-30 00:54:41.126749 | orchestrator | Monday 30 March 2026 00:45:26 +0000 (0:00:00.960) 0:00:51.940 **********
2026-03-30 00:54:41.126754 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-30 00:54:41.126758 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-30 00:54:41.126763 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-30 00:54:41.126767 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.126772 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-30 00:54:41.126776 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-30 00:54:41.126781 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-30 00:54:41.126785 | orchestrator |
2026-03-30 00:54:41.126790 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-30 00:54:41.126794 | orchestrator | Monday 30 March 2026 00:45:28 +0000 (0:00:01.892) 0:00:53.833 **********
2026-03-30 00:54:41.126799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.126805 | orchestrator |
2026-03-30 00:54:41.126809 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-30 00:54:41.126821 | orchestrator | Monday 30 March 2026 00:45:29 +0000 (0:00:01.291) 0:00:55.124 **********
2026-03-30 00:54:41.126825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.126830 | orchestrator |
2026-03-30 00:54:41.126835 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-30 00:54:41.126839 | orchestrator | Monday 30 March 2026 00:45:30 +0000 (0:00:01.205) 0:00:56.330 **********
2026-03-30 00:54:41.126844 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.126848 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.126853 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.126858 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.126862 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.126867 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.126871 | orchestrator |
2026-03-30 00:54:41.126876 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-30 00:54:41.126880 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:01.279) 0:00:57.609 **********
2026-03-30 00:54:41.126885 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126889 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126896 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.126901 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126905 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.126910 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.126914 | orchestrator |
2026-03-30 00:54:41.126919 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-30 00:54:41.126923 | orchestrator | Monday 30 March 2026 00:45:32 +0000 (0:00:00.689) 0:00:58.299 **********
2026-03-30 00:54:41.126928 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.126932 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126937 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.126941 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126946 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.126950 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126955 | orchestrator |
2026-03-30 00:54:41.126959 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-30 00:54:41.126964 | orchestrator | Monday 30 March 2026 00:45:34 +0000 (0:00:01.436) 0:00:59.736 **********
2026-03-30 00:54:41.126968 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.126973 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.126977 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.126982 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.126986 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.126991 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.126996 | orchestrator |
2026-03-30 00:54:41.127000 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-30 00:54:41.127005 | orchestrator | Monday 30 March 2026 00:45:36 +0000 (0:00:01.803) 0:01:01.540 **********
2026-03-30 00:54:41.127009 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127014 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127018 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127023 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127027 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127045 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127050 | orchestrator |
2026-03-30 00:54:41.127055 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-30 00:54:41.127059 | orchestrator | Monday 30 March 2026 00:45:37 +0000 (0:00:01.251) 0:01:02.791 **********
2026-03-30 00:54:41.127064 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127068 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127073 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127078 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127082 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127087 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127095 | orchestrator |
2026-03-30 00:54:41.127099 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-30 00:54:41.127104 | orchestrator | Monday 30 March 2026 00:45:38 +0000 (0:00:01.044) 0:01:03.836 **********
2026-03-30 00:54:41.127108 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127113 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127117 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127122 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127126 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127130 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127135 | orchestrator |
2026-03-30 00:54:41.127140 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-30 00:54:41.127144 | orchestrator | Monday 30 March 2026 00:45:39 +0000 (0:00:00.761) 0:01:04.597 **********
2026-03-30 00:54:41.127149 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127153 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127158 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127162 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127167 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127171 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127176 | orchestrator |
2026-03-30 00:54:41.127180 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-30 00:54:41.127185 | orchestrator | Monday 30 March 2026 00:45:40 +0000 (0:00:01.670) 0:01:06.268 **********
2026-03-30 00:54:41.127189 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127194 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127198 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127203 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127207 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127211 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127216 | orchestrator |
2026-03-30 00:54:41.127220 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-30 00:54:41.127225 | orchestrator | Monday 30 March 2026 00:45:42 +0000 (0:00:01.297) 0:01:07.566 **********
2026-03-30 00:54:41.127229 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127234 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127238 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127243 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127248 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127252 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127256 | orchestrator |
2026-03-30 00:54:41.127261 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-30 00:54:41.127266 | orchestrator | Monday 30 March 2026 00:45:43 +0000 (0:00:01.435) 0:01:09.001 **********
2026-03-30 00:54:41.127270 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127275 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127279 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127284 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127288 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127293 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127297 | orchestrator |
2026-03-30 00:54:41.127302 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-30 00:54:41.127306 | orchestrator | Monday 30 March 2026 00:45:44 +0000 (0:00:00.995) 0:01:09.997 **********
2026-03-30 00:54:41.127311 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127315 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127320 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127324 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127329 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127333 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127338 | orchestrator |
2026-03-30 00:54:41.127342 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-30 00:54:41.127347 | orchestrator | Monday 30 March 2026 00:45:45 +0000 (0:00:00.699) 0:01:10.696 **********
2026-03-30 00:54:41.127354 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127361 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127365 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127370 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127374 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127379 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127384 | orchestrator |
2026-03-30 00:54:41.127388 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-30 00:54:41.127393 | orchestrator | Monday 30 March 2026 00:45:45 +0000 (0:00:00.580) 0:01:11.277 **********
2026-03-30 00:54:41.127397 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127402 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127406 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127411 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127415 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127420 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127424 | orchestrator |
2026-03-30 00:54:41.127429 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-30 00:54:41.127434 | orchestrator | Monday 30 March 2026 00:45:46 +0000 (0:00:00.656) 0:01:11.934 **********
2026-03-30 00:54:41.127438 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127443 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127447 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127451 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127456 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127460 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127465 | orchestrator |
2026-03-30 00:54:41.127470 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-30 00:54:41.127474 | orchestrator | Monday 30 March 2026 00:45:47 +0000 (0:00:00.523) 0:01:12.458 **********
2026-03-30 00:54:41.127479 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127483 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127488 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127492 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127516 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127528 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127535 | orchestrator |
2026-03-30 00:54:41.127542 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-30 00:54:41.127550 | orchestrator | Monday 30 March 2026 00:45:47 +0000 (0:00:00.752) 0:01:13.210 **********
2026-03-30 00:54:41.127557 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127564 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127571 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127610 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127619 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127626 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127633 | orchestrator |
2026-03-30 00:54:41.127640 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-30 00:54:41.127648 | orchestrator | Monday 30 March 2026 00:45:48 +0000 (0:00:00.797) 0:01:14.008 **********
2026-03-30 00:54:41.127656 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127664 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127671 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127679 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127684 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127689 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127693 | orchestrator |
2026-03-30 00:54:41.127698 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-30 00:54:41.127703 | orchestrator | Monday 30 March 2026 00:45:49 +0000 (0:00:00.736) 0:01:14.744 **********
2026-03-30 00:54:41.127707 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.127712 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.127716 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.127721 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.127725 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.127734 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.127739 | orchestrator |
2026-03-30 00:54:41.127744 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-30 00:54:41.127748 | orchestrator | Monday 30 March 2026 00:45:50 +0000 (0:00:01.467) 0:01:16.212 **********
2026-03-30 00:54:41.127753 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:54:41.127757 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:54:41.127762 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:54:41.127766 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.127771 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.127775 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.127780 | orchestrator |
2026-03-30 00:54:41.127784 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-30 00:54:41.127789 | orchestrator | Monday 30 March 2026 00:45:52 +0000 (0:00:01.700) 0:01:17.912 **********
2026-03-30 00:54:41.127793 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:54:41.127798 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.127802 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:54:41.127807 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:54:41.127811 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.127816 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.127820 | orchestrator |
2026-03-30 00:54:41.127825 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-30 00:54:41.127829 | orchestrator | Monday 30 March 2026 00:45:55 +0000 (0:00:02.510) 0:01:20.422 **********
2026-03-30 00:54:41.127834 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.127838 | orchestrator |
2026-03-30 00:54:41.127843 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-30 00:54:41.127847 | orchestrator | Monday 30 March 2026 00:45:56 +0000 (0:00:00.989) 0:01:21.412 **********
2026-03-30 00:54:41.127852 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127856 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127861 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127865 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127870 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127874 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127879 | orchestrator |
2026-03-30 00:54:41.127883 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-30 00:54:41.127888 | orchestrator | Monday 30 March 2026 00:45:56 +0000 (0:00:00.491) 0:01:21.903 **********
2026-03-30 00:54:41.127895 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.127900 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.127904 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.127909 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.127913 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.127918 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.127922 | orchestrator |
2026-03-30 00:54:41.127927 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-30 00:54:41.127931 | orchestrator | Monday 30 March 2026 00:45:57 +0000 (0:00:00.630) 0:01:22.534 **********
2026-03-30 00:54:41.127936 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-30
00:54:41.127940 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-30 00:54:41.127945 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-30 00:54:41.127949 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-30 00:54:41.127954 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-30 00:54:41.127958 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-30 00:54:41.127963 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-30 00:54:41.127970 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-30 00:54:41.127975 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-30 00:54:41.127979 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-30 00:54:41.128004 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-30 00:54:41.128009 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-30 00:54:41.128014 | orchestrator | 2026-03-30 00:54:41.128018 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-30 00:54:41.128023 | orchestrator | Monday 30 March 2026 00:45:58 +0000 (0:00:01.207) 0:01:23.741 ********** 2026-03-30 00:54:41.128027 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.128032 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.128036 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.128041 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.128045 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.128050 | 
orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.128054 | orchestrator | 2026-03-30 00:54:41.128059 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-30 00:54:41.128063 | orchestrator | Monday 30 March 2026 00:45:59 +0000 (0:00:01.136) 0:01:24.878 ********** 2026-03-30 00:54:41.128068 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128072 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128077 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128081 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128086 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128090 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128095 | orchestrator | 2026-03-30 00:54:41.128099 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-30 00:54:41.128104 | orchestrator | Monday 30 March 2026 00:46:00 +0000 (0:00:00.548) 0:01:25.426 ********** 2026-03-30 00:54:41.128108 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128113 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128117 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128122 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128126 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128131 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128135 | orchestrator | 2026-03-30 00:54:41.128140 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-30 00:54:41.128144 | orchestrator | Monday 30 March 2026 00:46:00 +0000 (0:00:00.753) 0:01:26.179 ********** 2026-03-30 00:54:41.128149 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128153 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128158 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128162 | 
orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128167 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128171 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128176 | orchestrator | 2026-03-30 00:54:41.128180 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-30 00:54:41.128185 | orchestrator | Monday 30 March 2026 00:46:01 +0000 (0:00:00.461) 0:01:26.640 ********** 2026-03-30 00:54:41.128189 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.128194 | orchestrator | 2026-03-30 00:54:41.128199 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-30 00:54:41.128203 | orchestrator | Monday 30 March 2026 00:46:02 +0000 (0:00:00.979) 0:01:27.620 ********** 2026-03-30 00:54:41.128208 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.128212 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.128220 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.128224 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.128229 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.128233 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.128237 | orchestrator | 2026-03-30 00:54:41.128242 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-30 00:54:41.128247 | orchestrator | Monday 30 March 2026 00:47:07 +0000 (0:01:05.701) 0:02:33.321 ********** 2026-03-30 00:54:41.128251 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-30 00:54:41.128256 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-30 00:54:41.128260 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-30 00:54:41.128267 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128272 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-30 00:54:41.128276 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-30 00:54:41.128281 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-30 00:54:41.128285 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128290 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-30 00:54:41.128294 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-30 00:54:41.128299 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-30 00:54:41.128303 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-30 00:54:41.128308 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-30 00:54:41.128312 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-30 00:54:41.128317 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128321 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-30 00:54:41.128326 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-30 00:54:41.128331 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-30 00:54:41.128335 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128340 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128357 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-30 00:54:41.128362 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-30 00:54:41.128367 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-30 00:54:41.128371 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128376 | orchestrator | 2026-03-30 00:54:41.128380 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-30 00:54:41.128385 | orchestrator | Monday 30 March 2026 00:47:08 +0000 (0:00:00.586) 0:02:33.908 ********** 2026-03-30 00:54:41.128389 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128394 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128398 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128403 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128407 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128412 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128416 | orchestrator | 2026-03-30 00:54:41.128421 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-30 00:54:41.128426 | orchestrator | Monday 30 March 2026 00:47:09 +0000 (0:00:00.622) 0:02:34.530 ********** 2026-03-30 00:54:41.128430 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128435 | orchestrator | 2026-03-30 00:54:41.128439 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-30 00:54:41.128444 | orchestrator | Monday 30 March 2026 00:47:09 +0000 (0:00:00.099) 0:02:34.629 ********** 2026-03-30 00:54:41.128451 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128456 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128460 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128465 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128469 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128474 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 00:54:41.128478 | orchestrator | 2026-03-30 00:54:41.128483 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-30 00:54:41.128487 | orchestrator | Monday 30 March 2026 00:47:09 +0000 (0:00:00.655) 0:02:35.285 ********** 2026-03-30 00:54:41.128492 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128496 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128501 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128505 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128510 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128514 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128519 | orchestrator | 2026-03-30 00:54:41.128523 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-30 00:54:41.128528 | orchestrator | Monday 30 March 2026 00:47:10 +0000 (0:00:00.866) 0:02:36.151 ********** 2026-03-30 00:54:41.128533 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128537 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128542 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128546 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128551 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128555 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128560 | orchestrator | 2026-03-30 00:54:41.128564 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-30 00:54:41.128569 | orchestrator | Monday 30 March 2026 00:47:11 +0000 (0:00:00.788) 0:02:36.939 ********** 2026-03-30 00:54:41.128574 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.128590 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.128595 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.128599 | orchestrator | ok: [testbed-node-1] 2026-03-30 
00:54:41.128604 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.128608 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.128613 | orchestrator | 2026-03-30 00:54:41.128617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-30 00:54:41.128622 | orchestrator | Monday 30 March 2026 00:47:14 +0000 (0:00:02.898) 0:02:39.838 ********** 2026-03-30 00:54:41.128626 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.128631 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.128635 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.128640 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.128644 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.128649 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.128653 | orchestrator | 2026-03-30 00:54:41.128658 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-30 00:54:41.128664 | orchestrator | Monday 30 March 2026 00:47:15 +0000 (0:00:00.632) 0:02:40.470 ********** 2026-03-30 00:54:41.128669 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.128674 | orchestrator | 2026-03-30 00:54:41.128679 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-30 00:54:41.128683 | orchestrator | Monday 30 March 2026 00:47:16 +0000 (0:00:01.571) 0:02:42.041 ********** 2026-03-30 00:54:41.128688 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128693 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128697 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128702 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128706 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128711 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 00:54:41.128718 | orchestrator | 2026-03-30 00:54:41.128722 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-30 00:54:41.128727 | orchestrator | Monday 30 March 2026 00:47:17 +0000 (0:00:00.589) 0:02:42.630 ********** 2026-03-30 00:54:41.128731 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128736 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128740 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128745 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128749 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128754 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128758 | orchestrator | 2026-03-30 00:54:41.128763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-30 00:54:41.128767 | orchestrator | Monday 30 March 2026 00:47:18 +0000 (0:00:00.816) 0:02:43.447 ********** 2026-03-30 00:54:41.128772 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128776 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128794 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128799 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128804 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128808 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128813 | orchestrator | 2026-03-30 00:54:41.128817 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-30 00:54:41.128822 | orchestrator | Monday 30 March 2026 00:47:18 +0000 (0:00:00.752) 0:02:44.199 ********** 2026-03-30 00:54:41.128826 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128831 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128836 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128840 | orchestrator | skipping: 
[testbed-node-0] 2026-03-30 00:54:41.128844 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128849 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128853 | orchestrator | 2026-03-30 00:54:41.128858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-30 00:54:41.128862 | orchestrator | Monday 30 March 2026 00:47:19 +0000 (0:00:00.762) 0:02:44.962 ********** 2026-03-30 00:54:41.128867 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128871 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128876 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128880 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128885 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128889 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128894 | orchestrator | 2026-03-30 00:54:41.128898 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-30 00:54:41.128903 | orchestrator | Monday 30 March 2026 00:47:20 +0000 (0:00:00.718) 0:02:45.681 ********** 2026-03-30 00:54:41.128907 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128912 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128916 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128921 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128925 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128929 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128934 | orchestrator | 2026-03-30 00:54:41.128938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-30 00:54:41.128943 | orchestrator | Monday 30 March 2026 00:47:21 +0000 (0:00:00.903) 0:02:46.585 ********** 2026-03-30 00:54:41.128947 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128952 | orchestrator | skipping: 
[testbed-node-4] 2026-03-30 00:54:41.128956 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.128961 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.128965 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.128970 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.128974 | orchestrator | 2026-03-30 00:54:41.128979 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-30 00:54:41.128986 | orchestrator | Monday 30 March 2026 00:47:21 +0000 (0:00:00.625) 0:02:47.210 ********** 2026-03-30 00:54:41.128991 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.128995 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.128999 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.129004 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.129008 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.129013 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.129017 | orchestrator | 2026-03-30 00:54:41.129022 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-30 00:54:41.129026 | orchestrator | Monday 30 March 2026 00:47:22 +0000 (0:00:00.740) 0:02:47.950 ********** 2026-03-30 00:54:41.129031 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.129035 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.129040 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.129044 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.129049 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.129053 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.129058 | orchestrator | 2026-03-30 00:54:41.129062 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-30 00:54:41.129067 | orchestrator | Monday 30 March 2026 00:47:23 +0000 (0:00:01.372) 0:02:49.323 ********** 2026-03-30 
00:54:41.129071 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.129076 | orchestrator | 2026-03-30 00:54:41.129080 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-30 00:54:41.129087 | orchestrator | Monday 30 March 2026 00:47:24 +0000 (0:00:00.962) 0:02:50.286 ********** 2026-03-30 00:54:41.129092 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-30 00:54:41.129096 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-30 00:54:41.129101 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-30 00:54:41.129105 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-30 00:54:41.129110 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-30 00:54:41.129114 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-30 00:54:41.129119 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-30 00:54:41.129123 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-30 00:54:41.129128 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-30 00:54:41.129132 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-30 00:54:41.129137 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-30 00:54:41.129141 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-30 00:54:41.129146 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-30 00:54:41.129150 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-30 00:54:41.129155 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-30 00:54:41.129159 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-03-30 00:54:41.129164 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-30 00:54:41.129168 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-30 00:54:41.129185 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-30 00:54:41.129190 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-30 00:54:41.129195 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-30 00:54:41.129199 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-30 00:54:41.129204 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-30 00:54:41.129208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-30 00:54:41.129213 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-30 00:54:41.129222 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-30 00:54:41.129227 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-30 00:54:41.129231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-30 00:54:41.129235 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-30 00:54:41.129240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-30 00:54:41.129244 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-30 00:54:41.129249 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-30 00:54:41.129253 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-30 00:54:41.129258 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-30 00:54:41.129262 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-30 00:54:41.129267 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-30 00:54:41.129271 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-30 00:54:41.129276 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-30 00:54:41.129280 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-30 00:54:41.129285 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-30 00:54:41.129289 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-30 00:54:41.129294 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-30 00:54:41.129298 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-30 00:54:41.129303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-30 00:54:41.129307 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-30 00:54:41.129312 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-30 00:54:41.129316 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-30 00:54:41.129321 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-30 00:54:41.129325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-30 00:54:41.129330 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-30 00:54:41.129334 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-30 00:54:41.129339 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-30 00:54:41.129343 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-30 00:54:41.129348 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-30 00:54:41.129353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-30 00:54:41.129357 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-30 00:54:41.129362 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-30 00:54:41.129366 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-30 00:54:41.129371 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-30 00:54:41.129375 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-30 00:54:41.129382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-30 00:54:41.129386 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-30 00:54:41.129391 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-30 00:54:41.129396 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-30 00:54:41.129400 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-30 00:54:41.129405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-30 00:54:41.129412 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-30 00:54:41.129416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-30 00:54:41.129421 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-30 00:54:41.129425 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-30 00:54:41.129430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-30 00:54:41.129434 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-30 00:54:41.129439 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-30 00:54:41.129443 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-30 00:54:41.129447 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-30 00:54:41.129452 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-30 00:54:41.129468 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-30 00:54:41.129474 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-30 00:54:41.129478 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-30 00:54:41.129483 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-30 00:54:41.129487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-30 00:54:41.129492 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-30 00:54:41.129496 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-30 00:54:41.129500 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-30 00:54:41.129505 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-30 00:54:41.129510 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-30 00:54:41.129514 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-30 00:54:41.129519 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-30 00:54:41.129523 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-30 00:54:41.129527 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-30 00:54:41.129532 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-30 00:54:41.129536 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-30 00:54:41.129541 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-30 00:54:41.129545 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-30 00:54:41.129550 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-30 00:54:41.129554 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-30 00:54:41.129559 | orchestrator |
2026-03-30 00:54:41.129563 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-30 00:54:41.129568 | orchestrator | Monday 30 March 2026 00:47:32 +0000 (0:00:07.130) 0:02:57.417 **********
2026-03-30 00:54:41.129572 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129577 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129589 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129594 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:54:41.129599 | orchestrator |
2026-03-30 00:54:41.129603 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-30 00:54:41.129608 | orchestrator | Monday 30 March 2026 00:47:33 +0000 (0:00:01.141) 0:02:58.558 **********
2026-03-30 00:54:41.129612 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.129617 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.129625 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.129630 | orchestrator |
2026-03-30 00:54:41.129634 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-30 00:54:41.129639 | orchestrator | Monday 30 March 2026 00:47:33 +0000 (0:00:00.683) 0:02:59.242 **********
2026-03-30 00:54:41.129643 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.129648 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.129654 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.129659 | orchestrator |
2026-03-30 00:54:41.129664 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-30 00:54:41.129668 | orchestrator | Monday 30 March 2026 00:47:35 +0000 (0:00:01.336) 0:03:00.578 **********
2026-03-30 00:54:41.129673 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.129677 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.129682 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.129686 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129691 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129695 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129700 | orchestrator |
2026-03-30 00:54:41.129704 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-30 00:54:41.129709 | orchestrator | Monday 30 March 2026 00:47:35 +0000 (0:00:00.505) 0:03:01.084 **********
2026-03-30 00:54:41.129713 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.129718 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.129722 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.129727 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129731 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129736 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129740 | orchestrator |
2026-03-30 00:54:41.129745 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-30 00:54:41.129749 | orchestrator | Monday 30 March 2026 00:47:36 +0000 (0:00:00.847) 0:03:01.932 **********
2026-03-30 00:54:41.129754 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.129758 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.129763 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.129767 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129772 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129777 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129781 | orchestrator |
2026-03-30 00:54:41.129798 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-30 00:54:41.129804 | orchestrator | Monday 30 March 2026 00:47:37 +0000 (0:00:00.844) 0:03:02.776 **********
2026-03-30 00:54:41.129808 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.129813 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.129817 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.129822 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129826 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129831 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129835 | orchestrator |
2026-03-30 00:54:41.129840 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-30 00:54:41.129844 | orchestrator | Monday 30 March 2026 00:47:38 +0000 (0:00:00.690) 0:03:03.466 **********
2026-03-30 00:54:41.129849 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.129853 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.129858 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.129862 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129872 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129876 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129881 | orchestrator |
2026-03-30 00:54:41.129885 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-30 00:54:41.129890 | orchestrator | Monday 30 March 2026 00:47:39 +0000 (0:00:00.923) 0:03:04.389 **********
2026-03-30 00:54:41.129894 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.129899 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.129903 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.129907 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129912 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129916 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129921 | orchestrator |
2026-03-30 00:54:41.129925 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-30 00:54:41.129930 | orchestrator | Monday 30 March 2026 00:47:40 +0000 (0:00:01.218) 0:03:05.608 **********
2026-03-30 00:54:41.129935 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.129939 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.129944 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.129948 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129952 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129957 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.129961 | orchestrator |
2026-03-30 00:54:41.129966 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-30 00:54:41.129970 | orchestrator | Monday 30 March 2026 00:47:41 +0000 (0:00:01.089) 0:03:06.698 **********
2026-03-30 00:54:41.129975 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.129979 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.129984 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.129988 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.129993 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.129997 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130002 | orchestrator |
2026-03-30 00:54:41.130006 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-30 00:54:41.130011 | orchestrator | Monday 30 March 2026 00:47:42 +0000 (0:00:00.733) 0:03:07.432 **********
2026-03-30 00:54:41.130044 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130049 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130054 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130058 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.130063 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.130067 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.130072 | orchestrator |
2026-03-30 00:54:41.130076 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-30 00:54:41.130081 | orchestrator | Monday 30 March 2026 00:47:43 +0000 (0:00:01.887) 0:03:09.320 **********
2026-03-30 00:54:41.130085 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.130090 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.130094 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.130099 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130103 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130108 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130112 | orchestrator |
2026-03-30 00:54:41.130117 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-30 00:54:41.130124 | orchestrator | Monday 30 March 2026 00:47:44 +0000 (0:00:00.437) 0:03:09.757 **********
2026-03-30 00:54:41.130129 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.130133 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.130138 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.130142 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130147 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130152 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130160 | orchestrator |
2026-03-30 00:54:41.130164 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-30 00:54:41.130169 | orchestrator | Monday 30 March 2026 00:47:45 +0000 (0:00:00.695) 0:03:10.452 **********
2026-03-30 00:54:41.130173 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130178 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130182 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130187 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130191 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130196 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130200 | orchestrator |
2026-03-30 00:54:41.130205 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-30 00:54:41.130209 | orchestrator | Monday 30 March 2026 00:47:45 +0000 (0:00:00.500) 0:03:10.953 **********
2026-03-30 00:54:41.130214 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.130219 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.130223 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-30 00:54:41.130228 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130247 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130252 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130257 | orchestrator |
2026-03-30 00:54:41.130261 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-30 00:54:41.130266 | orchestrator | Monday 30 March 2026 00:47:46 +0000 (0:00:00.696) 0:03:11.649 **********
2026-03-30 00:54:41.130272 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-30 00:54:41.130277 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-30 00:54:41.130283 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-30 00:54:41.130287 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-30 00:54:41.130292 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130297 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130301 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-30 00:54:41.130306 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-30 00:54:41.130311 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130318 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130323 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130328 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130332 | orchestrator |
2026-03-30 00:54:41.130337 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-30 00:54:41.130341 | orchestrator | Monday 30 March 2026 00:47:46 +0000 (0:00:00.657) 0:03:12.307 **********
2026-03-30 00:54:41.130346 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130350 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130355 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130359 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130364 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130368 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130373 | orchestrator |
2026-03-30 00:54:41.130377 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-30 00:54:41.130384 | orchestrator | Monday 30 March 2026 00:47:47 +0000 (0:00:00.497) 0:03:12.981 **********
2026-03-30 00:54:41.130389 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130393 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130398 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130402 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130407 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130411 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130416 | orchestrator |
2026-03-30 00:54:41.130420 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-30 00:54:41.130425 | orchestrator | Monday 30 March 2026 00:47:48 +0000 (0:00:00.670) 0:03:13.479 **********
2026-03-30 00:54:41.130430 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130434 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130439 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130443 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130448 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130452 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130457 | orchestrator |
2026-03-30 00:54:41.130461 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-30 00:54:41.130466 | orchestrator | Monday 30 March 2026 00:47:48 +0000 (0:00:00.670) 0:03:14.150 **********
2026-03-30 00:54:41.130470 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130475 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130479 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130484 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130488 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130493 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130497 | orchestrator |
2026-03-30 00:54:41.130502 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-30 00:54:41.130519 | orchestrator | Monday 30 March 2026 00:47:49 +0000 (0:00:00.604) 0:03:14.754 **********
2026-03-30 00:54:41.130524 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130529 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130533 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130538 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130542 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130547 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130551 | orchestrator |
2026-03-30 00:54:41.130556 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-30 00:54:41.130560 | orchestrator | Monday 30 March 2026 00:47:50 +0000 (0:00:00.747) 0:03:15.502 **********
2026-03-30 00:54:41.130565 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.130569 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.130574 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130602 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.130607 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130616 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130620 | orchestrator |
2026-03-30 00:54:41.130625 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-30 00:54:41.130630 | orchestrator | Monday 30 March 2026 00:47:50 +0000 (0:00:00.729) 0:03:16.231 **********
2026-03-30 00:54:41.130634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.130639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.130643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.130648 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130652 | orchestrator |
2026-03-30 00:54:41.130657 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-30 00:54:41.130662 | orchestrator | Monday 30 March 2026 00:47:51 +0000 (0:00:00.352) 0:03:16.584 **********
2026-03-30 00:54:41.130666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.130671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.130675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.130680 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130684 | orchestrator |
2026-03-30 00:54:41.130689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-30 00:54:41.130693 | orchestrator | Monday 30 March 2026 00:47:51 +0000 (0:00:00.507) 0:03:17.092 **********
2026-03-30 00:54:41.130698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.130702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.130707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.130711 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130716 | orchestrator |
2026-03-30 00:54:41.130720 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-30 00:54:41.130725 | orchestrator | Monday 30 March 2026 00:47:52 +0000 (0:00:00.555) 0:03:17.647 **********
2026-03-30 00:54:41.130729 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.130734 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.130738 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.130743 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130747 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130752 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130756 | orchestrator |
2026-03-30 00:54:41.130761 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-30 00:54:41.130765 | orchestrator | Monday 30 March 2026 00:47:53 +0000 (0:00:00.762) 0:03:18.410 **********
2026-03-30 00:54:41.130770 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-30 00:54:41.130774 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-30 00:54:41.130779 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-30 00:54:41.130783 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-30 00:54:41.130788 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.130792 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-30 00:54:41.130797 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.130801 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-30 00:54:41.130806 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.130810 | orchestrator |
2026-03-30 00:54:41.130815 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-30 00:54:41.130822 | orchestrator | Monday 30 March 2026 00:47:54 +0000 (0:00:01.640) 0:03:20.051 **********
2026-03-30 00:54:41.130826 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:54:41.130831 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:54:41.130835 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:54:41.130840 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.130844 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.130849 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.130853 | orchestrator |
2026-03-30 00:54:41.130861 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-30 00:54:41.130865 | orchestrator | Monday 30 March 2026 00:47:56 +0000 (0:00:02.248) 0:03:22.299 **********
2026-03-30 00:54:41.130870 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:54:41.130874 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:54:41.130879 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:54:41.130883 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.130888 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.130892 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.130897 | orchestrator |
2026-03-30 00:54:41.130901 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-30 00:54:41.130906 | orchestrator | Monday 30 March 2026 00:47:58 +0000 (0:00:01.905) 0:03:24.205 **********
2026-03-30 00:54:41.130910 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.130915 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.130919 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.130924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.130928 | orchestrator |
2026-03-30 00:54:41.130933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-30 00:54:41.130951 | orchestrator | Monday 30 March 2026 00:48:00 +0000 (0:00:01.175) 0:03:25.381 **********
2026-03-30 00:54:41.130957 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.130961 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.130966 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.130970 | orchestrator |
2026-03-30 00:54:41.130975 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-30 00:54:41.130979 | orchestrator | Monday 30 March 2026 00:48:00 +0000 (0:00:00.459) 0:03:25.840 **********
2026-03-30 00:54:41.130984 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.130988 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.130993 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.130997 | orchestrator |
2026-03-30 00:54:41.131002 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-30 00:54:41.131006 | orchestrator | Monday 30 March 2026 00:48:01 +0000 (0:00:01.321) 0:03:27.161 **********
2026-03-30 00:54:41.131011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-30 00:54:41.131015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-30 00:54:41.131020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-30 00:54:41.131024 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.131029 | orchestrator |
2026-03-30 00:54:41.131033 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-30 00:54:41.131038 | orchestrator | Monday 30 March 2026 00:48:02 +0000 (0:00:00.865) 0:03:28.027 **********
2026-03-30 00:54:41.131042 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.131047 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.131051 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.131056 | orchestrator |
2026-03-30 00:54:41.131060 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-30 00:54:41.131065 | orchestrator | Monday 30 March 2026 00:48:03 +0000 (0:00:00.386) 0:03:28.414 **********
2026-03-30 00:54:41.131069 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.131074 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.131078 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.131083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:54:41.131087 | orchestrator |
2026-03-30 00:54:41.131092 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-30 00:54:41.131096 | orchestrator | Monday 30 March 2026 00:48:04 +0000 (0:00:01.264) 0:03:29.678 **********
2026-03-30 00:54:41.131101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.131105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.131113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.131117 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131122 | orchestrator |
2026-03-30 00:54:41.131126 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-30 00:54:41.131131 | orchestrator | Monday 30 March 2026 00:48:04 +0000 (0:00:00.382) 0:03:30.061 **********
2026-03-30 00:54:41.131135 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131140 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.131144 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.131149 | orchestrator |
2026-03-30 00:54:41.131153 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-30 00:54:41.131158 | orchestrator | Monday 30 March 2026 00:48:05 +0000 (0:00:00.388) 0:03:30.450 **********
2026-03-30 00:54:41.131162 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131167 | orchestrator |
2026-03-30 00:54:41.131171 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-30 00:54:41.131176 | orchestrator | Monday 30 March 2026 00:48:05 +0000 (0:00:00.702) 0:03:31.152 **********
2026-03-30 00:54:41.131180 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131185 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.131189 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.131194 | orchestrator |
2026-03-30 00:54:41.131198 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-30 00:54:41.131203 | orchestrator | Monday 30 March 2026 00:48:06 +0000 (0:00:00.329) 0:03:31.482 **********
2026-03-30 00:54:41.131207 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131212 | orchestrator |
2026-03-30 00:54:41.131216 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-30 00:54:41.131221 | orchestrator | Monday 30 March 2026 00:48:06 +0000 (0:00:00.222) 0:03:31.705 **********
2026-03-30 00:54:41.131225 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131230 | orchestrator |
2026-03-30 00:54:41.131234 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-30 00:54:41.131253 | orchestrator | Monday 30 March 2026 00:48:06 +0000 (0:00:00.250) 0:03:31.956 **********
2026-03-30 00:54:41.131258 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131263 | orchestrator |
2026-03-30 00:54:41.131267 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-30 00:54:41.131272 | orchestrator | Monday 30 March 2026 00:48:06 +0000 (0:00:00.120) 0:03:32.076 **********
2026-03-30 00:54:41.131276 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131281 | orchestrator |
2026-03-30 00:54:41.131285 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-30 00:54:41.131290 | orchestrator | Monday 30 March 2026 00:48:07 +0000 (0:00:00.328) 0:03:32.405 **********
2026-03-30 00:54:41.131294 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131298 | orchestrator |
2026-03-30 00:54:41.131303 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-30 00:54:41.131307 | orchestrator | Monday 30 March 2026 00:48:07 +0000 (0:00:00.339) 0:03:32.745 **********
2026-03-30 00:54:41.131312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.131316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.131321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.131326 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131330 | orchestrator |
2026-03-30 00:54:41.131335 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-30 00:54:41.131352 | orchestrator | Monday 30 March 2026 00:48:07 +0000 (0:00:00.423) 0:03:33.169 **********
2026-03-30 00:54:41.131357 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131362 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:54:41.131367 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:54:41.131371 | orchestrator |
2026-03-30 00:54:41.131381 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-30 00:54:41.131385 | orchestrator | Monday 30 March 2026 00:48:08 +0000 (0:00:00.550) 0:03:33.719 **********
2026-03-30 00:54:41.131390 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131394 | orchestrator |
2026-03-30 00:54:41.131399 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-30 00:54:41.131403 | orchestrator | Monday 30 March 2026 00:48:08 +0000 (0:00:00.245) 0:03:33.964 **********
2026-03-30 00:54:41.131408 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131412 | orchestrator |
2026-03-30 00:54:41.131417 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-30 00:54:41.131421 | orchestrator | Monday 30 March 2026 00:48:08 +0000 (0:00:00.226) 0:03:34.191 **********
2026-03-30 00:54:41.131426 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.131430 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.131435 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.131439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:54:41.131444 | orchestrator |
2026-03-30 00:54:41.131449 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-30 00:54:41.131453 | orchestrator | Monday 30 March 2026 00:48:09 +0000 (0:00:00.801) 0:03:34.992 **********
2026-03-30 00:54:41.131458 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.131462 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.131467 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.131472 | orchestrator |
2026-03-30 00:54:41.131476 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-30 00:54:41.131481 | orchestrator | Monday 30 March 2026 00:48:10 +0000 (0:00:00.546) 0:03:35.539 **********
2026-03-30 00:54:41.131485 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:54:41.131490 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:54:41.131494 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:54:41.131499 | orchestrator |
2026-03-30 00:54:41.131503 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-30 00:54:41.131508 | orchestrator | Monday 30 March 2026 00:48:11 +0000 (0:00:01.152) 0:03:36.691 **********
2026-03-30 00:54:41.131512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-30 00:54:41.131517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-30 00:54:41.131521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-30 00:54:41.131526 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:54:41.131531 | orchestrator |
2026-03-30 00:54:41.131535 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-30 00:54:41.131540 | orchestrator | Monday 30 March 2026 00:48:11 +0000 (0:00:00.386) 0:03:37.336 **********
2026-03-30 00:54:41.131544 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.131549 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.131553 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.131558 | orchestrator |
2026-03-30 00:54:41.131562 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-30 00:54:41.131567 | orchestrator | Monday 30 March 2026 00:48:12 +0000 (0:00:00.386) 0:03:37.723 **********
2026-03-30 00:54:41.131571 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.131576 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.131590 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.131595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-30 00:54:41.131600 | orchestrator |
2026-03-30 00:54:41.131604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-30 00:54:41.131609 | orchestrator | Monday 30 March 2026 00:48:13 +0000 (0:00:01.311) 0:03:39.034 **********
2026-03-30 00:54:41.131614 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:54:41.131618 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:54:41.131626 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:54:41.131630 | orchestrator |
2026-03-30 00:54:41.131638 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-30 00:54:41.131646 | orchestrator | Monday 30 March 2026 00:48:14 +0000 (0:00:00.377) 0:03:39.412 **********
2026-03-30 00:54:41.131653 | orchestrator | changed: [testbed-node-4]
2026-03-30 00:54:41.131661 | orchestrator | changed: [testbed-node-3]
2026-03-30 00:54:41.131668 | orchestrator | changed: [testbed-node-5]
2026-03-30 00:54:41.131674 | orchestrator |
2026-03-30 00:54:41.131681 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)]
******************** 2026-03-30 00:54:41.131689 | orchestrator | Monday 30 March 2026 00:48:15 +0000 (0:00:01.471) 0:03:40.883 ********** 2026-03-30 00:54:41.131697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:54:41.131703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:54:41.131708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:54:41.131712 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.131717 | orchestrator | 2026-03-30 00:54:41.131721 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-30 00:54:41.131726 | orchestrator | Monday 30 March 2026 00:48:16 +0000 (0:00:00.746) 0:03:41.629 ********** 2026-03-30 00:54:41.131731 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.131735 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.131740 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.131744 | orchestrator | 2026-03-30 00:54:41.131749 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-30 00:54:41.131753 | orchestrator | Monday 30 March 2026 00:48:16 +0000 (0:00:00.420) 0:03:42.050 ********** 2026-03-30 00:54:41.131758 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.131763 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.131767 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.131772 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.131776 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.131796 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.131801 | orchestrator | 2026-03-30 00:54:41.131806 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-30 00:54:41.131810 | orchestrator | Monday 30 March 2026 00:48:17 +0000 (0:00:00.646) 0:03:42.696 ********** 2026-03-30 
00:54:41.131815 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.131819 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.131824 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.131828 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.131833 | orchestrator | 2026-03-30 00:54:41.131838 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-30 00:54:41.131842 | orchestrator | Monday 30 March 2026 00:48:18 +0000 (0:00:00.975) 0:03:43.672 ********** 2026-03-30 00:54:41.131847 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.131851 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.131856 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.131860 | orchestrator | 2026-03-30 00:54:41.131864 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-30 00:54:41.131869 | orchestrator | Monday 30 March 2026 00:48:18 +0000 (0:00:00.341) 0:03:44.013 ********** 2026-03-30 00:54:41.131873 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.131878 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.131882 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.131887 | orchestrator | 2026-03-30 00:54:41.131892 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-30 00:54:41.131896 | orchestrator | Monday 30 March 2026 00:48:20 +0000 (0:00:01.478) 0:03:45.491 ********** 2026-03-30 00:54:41.131901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-30 00:54:41.131905 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-30 00:54:41.131914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-30 00:54:41.131918 | orchestrator | skipping: [testbed-node-0] 2026-03-30 
00:54:41.131923 | orchestrator | 2026-03-30 00:54:41.131927 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-30 00:54:41.131932 | orchestrator | Monday 30 March 2026 00:48:20 +0000 (0:00:00.614) 0:03:46.106 ********** 2026-03-30 00:54:41.131936 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.131941 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.131945 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.131950 | orchestrator | 2026-03-30 00:54:41.131954 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-30 00:54:41.131959 | orchestrator | 2026-03-30 00:54:41.131963 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-30 00:54:41.131968 | orchestrator | Monday 30 March 2026 00:48:21 +0000 (0:00:00.607) 0:03:46.714 ********** 2026-03-30 00:54:41.131972 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.131977 | orchestrator | 2026-03-30 00:54:41.131981 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-30 00:54:41.131986 | orchestrator | Monday 30 March 2026 00:48:21 +0000 (0:00:00.566) 0:03:47.280 ********** 2026-03-30 00:54:41.131991 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.131995 | orchestrator | 2026-03-30 00:54:41.132000 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-30 00:54:41.132004 | orchestrator | Monday 30 March 2026 00:48:22 +0000 (0:00:00.515) 0:03:47.796 ********** 2026-03-30 00:54:41.132009 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.132013 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.132018 | 
orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.132022 | orchestrator | 2026-03-30 00:54:41.132027 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-30 00:54:41.132031 | orchestrator | Monday 30 March 2026 00:48:23 +0000 (0:00:00.680) 0:03:48.477 ********** 2026-03-30 00:54:41.132036 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132040 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132045 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132049 | orchestrator | 2026-03-30 00:54:41.132056 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-30 00:54:41.132061 | orchestrator | Monday 30 March 2026 00:48:23 +0000 (0:00:00.269) 0:03:48.746 ********** 2026-03-30 00:54:41.132065 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132070 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132074 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132079 | orchestrator | 2026-03-30 00:54:41.132083 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-30 00:54:41.132088 | orchestrator | Monday 30 March 2026 00:48:23 +0000 (0:00:00.520) 0:03:49.266 ********** 2026-03-30 00:54:41.132092 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132097 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132101 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132106 | orchestrator | 2026-03-30 00:54:41.132110 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-30 00:54:41.132115 | orchestrator | Monday 30 March 2026 00:48:24 +0000 (0:00:00.344) 0:03:49.611 ********** 2026-03-30 00:54:41.132119 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.132124 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.132128 | orchestrator | ok: 
[testbed-node-2] 2026-03-30 00:54:41.132133 | orchestrator | 2026-03-30 00:54:41.132137 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-30 00:54:41.132142 | orchestrator | Monday 30 March 2026 00:48:24 +0000 (0:00:00.742) 0:03:50.353 ********** 2026-03-30 00:54:41.132146 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132154 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132158 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132163 | orchestrator | 2026-03-30 00:54:41.132167 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-30 00:54:41.132172 | orchestrator | Monday 30 March 2026 00:48:25 +0000 (0:00:00.325) 0:03:50.678 ********** 2026-03-30 00:54:41.132189 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132194 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132199 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132204 | orchestrator | 2026-03-30 00:54:41.132208 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-30 00:54:41.132213 | orchestrator | Monday 30 March 2026 00:48:25 +0000 (0:00:00.564) 0:03:51.243 ********** 2026-03-30 00:54:41.132217 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.132222 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.132226 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.132231 | orchestrator | 2026-03-30 00:54:41.132235 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-30 00:54:41.132240 | orchestrator | Monday 30 March 2026 00:48:26 +0000 (0:00:00.825) 0:03:52.069 ********** 2026-03-30 00:54:41.132244 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.132249 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.132253 | orchestrator | ok: [testbed-node-1] 2026-03-30 
00:54:41.132258 | orchestrator | 2026-03-30 00:54:41.132262 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-30 00:54:41.132267 | orchestrator | Monday 30 March 2026 00:48:27 +0000 (0:00:00.841) 0:03:52.911 ********** 2026-03-30 00:54:41.132271 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132276 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132280 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132285 | orchestrator | 2026-03-30 00:54:41.132289 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-30 00:54:41.132294 | orchestrator | Monday 30 March 2026 00:48:28 +0000 (0:00:00.800) 0:03:53.711 ********** 2026-03-30 00:54:41.132298 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.132303 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.132307 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.132312 | orchestrator | 2026-03-30 00:54:41.132316 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-30 00:54:41.132321 | orchestrator | Monday 30 March 2026 00:48:29 +0000 (0:00:00.769) 0:03:54.481 ********** 2026-03-30 00:54:41.132325 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132330 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132334 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132339 | orchestrator | 2026-03-30 00:54:41.132343 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-30 00:54:41.132348 | orchestrator | Monday 30 March 2026 00:48:29 +0000 (0:00:00.272) 0:03:54.753 ********** 2026-03-30 00:54:41.132353 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132357 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132362 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132366 | 
orchestrator | 2026-03-30 00:54:41.132371 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-30 00:54:41.132375 | orchestrator | Monday 30 March 2026 00:48:29 +0000 (0:00:00.280) 0:03:55.034 ********** 2026-03-30 00:54:41.132380 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132384 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132389 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132393 | orchestrator | 2026-03-30 00:54:41.132398 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-30 00:54:41.132402 | orchestrator | Monday 30 March 2026 00:48:30 +0000 (0:00:01.091) 0:03:56.126 ********** 2026-03-30 00:54:41.132407 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132412 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132419 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132424 | orchestrator | 2026-03-30 00:54:41.132428 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-30 00:54:41.132433 | orchestrator | Monday 30 March 2026 00:48:31 +0000 (0:00:00.755) 0:03:56.881 ********** 2026-03-30 00:54:41.132437 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.132442 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.132446 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.132451 | orchestrator | 2026-03-30 00:54:41.132455 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-30 00:54:41.132460 | orchestrator | Monday 30 March 2026 00:48:31 +0000 (0:00:00.302) 0:03:57.183 ********** 2026-03-30 00:54:41.132464 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.132469 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.132473 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.132478 | orchestrator | 
2026-03-30 00:54:41.132484 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-30 00:54:41.132489 | orchestrator | Monday 30 March 2026 00:48:32 +0000 (0:00:00.450) 0:03:57.634 **********
2026-03-30 00:54:41.132494 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132498 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132503 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132507 | orchestrator |
2026-03-30 00:54:41.132511 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-30 00:54:41.132516 | orchestrator | Monday 30 March 2026 00:48:32 +0000 (0:00:00.549) 0:03:58.183 **********
2026-03-30 00:54:41.132520 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132525 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132529 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132534 | orchestrator |
2026-03-30 00:54:41.132538 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-30 00:54:41.132543 | orchestrator | Monday 30 March 2026 00:48:33 +0000 (0:00:00.884) 0:03:59.068 **********
2026-03-30 00:54:41.132547 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132552 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132556 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132561 | orchestrator |
2026-03-30 00:54:41.132565 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-30 00:54:41.132570 | orchestrator | Monday 30 March 2026 00:48:34 +0000 (0:00:00.637) 0:03:59.706 **********
2026-03-30 00:54:41.132574 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.132590 | orchestrator |
2026-03-30 00:54:41.132599 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-30 00:54:41.132605 | orchestrator | Monday 30 March 2026 00:48:35 +0000 (0:00:00.341) 0:04:00.510 **********
2026-03-30 00:54:41.132610 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.132614 | orchestrator |
2026-03-30 00:54:41.132633 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-30 00:54:41.132638 | orchestrator | Monday 30 March 2026 00:48:35 +0000 (0:00:00.341) 0:04:00.852 **********
2026-03-30 00:54:41.132643 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-30 00:54:41.132647 | orchestrator |
2026-03-30 00:54:41.132652 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-30 00:54:41.132656 | orchestrator | Monday 30 March 2026 00:48:36 +0000 (0:00:00.963) 0:04:01.815 **********
2026-03-30 00:54:41.132661 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132666 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132670 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132675 | orchestrator |
2026-03-30 00:54:41.132679 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-30 00:54:41.132684 | orchestrator | Monday 30 March 2026 00:48:36 +0000 (0:00:00.312) 0:04:02.127 **********
2026-03-30 00:54:41.132688 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132693 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132700 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132705 | orchestrator |
2026-03-30 00:54:41.132709 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-30 00:54:41.132714 | orchestrator | Monday 30 March 2026 00:48:37 +0000 (0:00:00.363) 0:04:02.491 **********
2026-03-30 00:54:41.132718 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.132723 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.132728 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.132732 | orchestrator |
2026-03-30 00:54:41.132737 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-30 00:54:41.132741 | orchestrator | Monday 30 March 2026 00:48:38 +0000 (0:00:01.057) 0:04:03.548 **********
2026-03-30 00:54:41.132746 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.132750 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.132755 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.132759 | orchestrator |
2026-03-30 00:54:41.132764 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-30 00:54:41.132768 | orchestrator | Monday 30 March 2026 00:48:39 +0000 (0:00:00.918) 0:04:04.467 **********
2026-03-30 00:54:41.132773 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.132777 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.132782 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.132786 | orchestrator |
2026-03-30 00:54:41.132791 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-30 00:54:41.132795 | orchestrator | Monday 30 March 2026 00:48:39 +0000 (0:00:00.640) 0:04:05.107 **********
2026-03-30 00:54:41.132800 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132804 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132809 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132813 | orchestrator |
2026-03-30 00:54:41.132818 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-30 00:54:41.132822 | orchestrator | Monday 30 March 2026 00:48:40 +0000 (0:00:01.012) 0:04:06.120 **********
2026-03-30 00:54:41.132827 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.132831 | orchestrator |
2026-03-30 00:54:41.132836 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-30 00:54:41.132841 | orchestrator | Monday 30 March 2026 00:48:41 +0000 (0:00:01.219) 0:04:07.340 **********
2026-03-30 00:54:41.132845 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132849 | orchestrator |
2026-03-30 00:54:41.132854 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-30 00:54:41.132858 | orchestrator | Monday 30 March 2026 00:48:42 +0000 (0:00:00.723) 0:04:08.063 **********
2026-03-30 00:54:41.132863 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-30 00:54:41.132868 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-30 00:54:41.132872 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-30 00:54:41.132877 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-30 00:54:41.132881 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-30 00:54:41.132886 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-30 00:54:41.132890 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-30 00:54:41.132897 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-30 00:54:41.132902 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-30 00:54:41.132907 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-30 00:54:41.132911 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-30 00:54:41.132916 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-30 00:54:41.132920 | orchestrator |
2026-03-30 00:54:41.132925 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-30 00:54:41.132929 | orchestrator | Monday 30 March 2026 00:48:46 +0000 (0:00:03.714) 0:04:11.777 **********
2026-03-30 00:54:41.132937 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.132941 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.132946 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.132950 | orchestrator |
2026-03-30 00:54:41.132955 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-30 00:54:41.132959 | orchestrator | Monday 30 March 2026 00:48:47 +0000 (0:00:01.565) 0:04:13.342 **********
2026-03-30 00:54:41.132964 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132968 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.132973 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.132977 | orchestrator |
2026-03-30 00:54:41.132982 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-30 00:54:41.132986 | orchestrator | Monday 30 March 2026 00:48:48 +0000 (0:00:00.293) 0:04:13.636 **********
2026-03-30 00:54:41.132991 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:54:41.132995 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:54:41.133000 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:54:41.133004 | orchestrator |
2026-03-30 00:54:41.133009 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-30 00:54:41.133013 | orchestrator | Monday 30 March 2026 00:48:48 +0000 (0:00:00.288) 0:04:13.925 **********
2026-03-30 00:54:41.133018 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.133035 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.133041 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.133045 | orchestrator |
2026-03-30 00:54:41.133050 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-30 00:54:41.133054 | orchestrator | Monday 30 March 2026 00:48:50 +0000 (0:00:02.126) 0:04:16.051 **********
2026-03-30 00:54:41.133059 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.133063 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.133068 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.133072 | orchestrator |
2026-03-30 00:54:41.133077 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-30 00:54:41.133082 | orchestrator | Monday 30 March 2026 00:48:52 +0000 (0:00:01.614) 0:04:17.665 **********
2026-03-30 00:54:41.133086 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.133091 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.133095 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.133100 | orchestrator |
2026-03-30 00:54:41.133104 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-30 00:54:41.133109 | orchestrator | Monday 30 March 2026 00:48:52 +0000 (0:00:00.245) 0:04:17.911 **********
2026-03-30 00:54:41.133113 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.133118 | orchestrator |
2026-03-30 00:54:41.133122 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-30 00:54:41.133127 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.460) 0:04:18.372 **********
2026-03-30 00:54:41.133131 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.133136 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.133140 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.133145 | orchestrator |
2026-03-30 00:54:41.133149 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-30 00:54:41.133154 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.387) 0:04:18.759 **********
2026-03-30 00:54:41.133159 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:54:41.133163 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:54:41.133168 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:54:41.133172 | orchestrator |
2026-03-30 00:54:41.133177 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-30 00:54:41.133181 | orchestrator | Monday 30 March 2026 00:48:53 +0000 (0:00:00.310) 0:04:19.070 **********
2026-03-30 00:54:41.133186 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.133193 | orchestrator |
2026-03-30 00:54:41.133197 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-30 00:54:41.133202 | orchestrator | Monday 30 March 2026 00:48:54 +0000 (0:00:00.451) 0:04:19.521 **********
2026-03-30 00:54:41.133206 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.133211 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.133216 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.133220 | orchestrator |
2026-03-30 00:54:41.133225 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-30 00:54:41.133229 | orchestrator | Monday 30 March 2026 00:48:55 +0000 (0:00:01.601) 0:04:21.122 **********
2026-03-30 00:54:41.133234 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.133238 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.133243 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.133247 | orchestrator |
2026-03-30 00:54:41.133252 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-30 00:54:41.133256 | orchestrator | Monday 30 March 2026 00:48:57 +0000 (0:00:01.405) 0:04:22.528 **********
2026-03-30 00:54:41.133261 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.133265 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.133270 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.133275 | orchestrator |
2026-03-30 00:54:41.133279 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-30 00:54:41.133284 | orchestrator | Monday 30 March 2026 00:48:59 +0000 (0:00:01.859) 0:04:24.387 **********
2026-03-30 00:54:41.133288 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:54:41.133293 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:54:41.133297 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:54:41.133302 | orchestrator |
2026-03-30 00:54:41.133308 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-30 00:54:41.133313 | orchestrator | Monday 30 March 2026 00:49:01 +0000 (0:00:02.086) 0:04:26.474 **********
2026-03-30 00:54:41.133318 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:54:41.133322 | orchestrator |
2026-03-30 00:54:41.133327 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-30 00:54:41.133331 | orchestrator | Monday 30 March 2026 00:49:02 +0000 (0:00:01.071) 0:04:27.545 **********
2026-03-30 00:54:41.133336 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-30 00:54:41.133340 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133345 | orchestrator | 2026-03-30 00:54:41.133349 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-30 00:54:41.133354 | orchestrator | Monday 30 March 2026 00:49:23 +0000 (0:00:21.475) 0:04:49.020 ********** 2026-03-30 00:54:41.133358 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133363 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.133367 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133372 | orchestrator | 2026-03-30 00:54:41.133376 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-30 00:54:41.133381 | orchestrator | Monday 30 March 2026 00:49:29 +0000 (0:00:06.188) 0:04:55.209 ********** 2026-03-30 00:54:41.133385 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133390 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133395 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133399 | orchestrator | 2026-03-30 00:54:41.133403 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-30 00:54:41.133421 | orchestrator | Monday 30 March 2026 00:49:30 +0000 (0:00:00.286) 0:04:55.496 ********** 2026-03-30 00:54:41.133427 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-30 00:54:41.133436 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-30 00:54:41.133441 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-30 00:54:41.133446 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-30 00:54:41.133451 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-30 00:54:41.133456 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fff864e1afb9df80c8268d3d03b52b67350d7960'}])  2026-03-30 00:54:41.133461 | orchestrator | 2026-03-30 00:54:41.133466 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-30 00:54:41.133470 | orchestrator | Monday 30 March 2026 00:49:38 +0000 (0:00:08.675) 0:05:04.171 ********** 2026-03-30 00:54:41.133475 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133479 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133484 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133488 | orchestrator | 2026-03-30 00:54:41.133493 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-30 00:54:41.133497 | orchestrator | Monday 30 March 2026 00:49:39 +0000 (0:00:00.309) 0:05:04.481 ********** 2026-03-30 00:54:41.133504 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.133509 | orchestrator | 2026-03-30 00:54:41.133513 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-30 00:54:41.133518 | orchestrator | Monday 30 March 2026 00:49:39 +0000 (0:00:00.465) 0:05:04.946 ********** 2026-03-30 00:54:41.133522 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133527 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133531 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.133536 | orchestrator | 2026-03-30 00:54:41.133540 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-30 00:54:41.133544 | orchestrator | Monday 30 March 2026 00:49:39 +0000 (0:00:00.410) 0:05:05.357 ********** 2026-03-30 00:54:41.133549 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133553 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133558 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133562 | orchestrator | 2026-03-30 00:54:41.133567 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-30 
00:54:41.133575 | orchestrator | Monday 30 March 2026 00:49:40 +0000 (0:00:00.271) 0:05:05.628 ********** 2026-03-30 00:54:41.133606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-30 00:54:41.133613 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-30 00:54:41.133620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-30 00:54:41.133628 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133633 | orchestrator | 2026-03-30 00:54:41.133637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-30 00:54:41.133642 | orchestrator | Monday 30 March 2026 00:49:40 +0000 (0:00:00.597) 0:05:06.226 ********** 2026-03-30 00:54:41.133647 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133651 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133671 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.133676 | orchestrator | 2026-03-30 00:54:41.133680 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-30 00:54:41.133685 | orchestrator | 2026-03-30 00:54:41.133689 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-30 00:54:41.133694 | orchestrator | Monday 30 March 2026 00:49:41 +0000 (0:00:00.638) 0:05:06.864 ********** 2026-03-30 00:54:41.133699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-2, testbed-node-0 2026-03-30 00:54:41.133703 | orchestrator | 2026-03-30 00:54:41.133708 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-30 00:54:41.133712 | orchestrator | Monday 30 March 2026 00:49:41 +0000 (0:00:00.443) 0:05:07.307 ********** 2026-03-30 00:54:41.133717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, 
testbed-node-0, testbed-node-2 2026-03-30 00:54:41.133721 | orchestrator | 2026-03-30 00:54:41.133726 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-30 00:54:41.133730 | orchestrator | Monday 30 March 2026 00:49:42 +0000 (0:00:00.439) 0:05:07.747 ********** 2026-03-30 00:54:41.133735 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133739 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133744 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.133748 | orchestrator | 2026-03-30 00:54:41.133753 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-30 00:54:41.133757 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:00.925) 0:05:08.672 ********** 2026-03-30 00:54:41.133762 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133766 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133771 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133775 | orchestrator | 2026-03-30 00:54:41.133780 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-30 00:54:41.133784 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:00.250) 0:05:08.923 ********** 2026-03-30 00:54:41.133789 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133793 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133798 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133803 | orchestrator | 2026-03-30 00:54:41.133807 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-30 00:54:41.133812 | orchestrator | Monday 30 March 2026 00:49:43 +0000 (0:00:00.255) 0:05:09.179 ********** 2026-03-30 00:54:41.133816 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133821 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133825 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 00:54:41.133830 | orchestrator | 2026-03-30 00:54:41.133834 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-30 00:54:41.133839 | orchestrator | Monday 30 March 2026 00:49:44 +0000 (0:00:00.254) 0:05:09.434 ********** 2026-03-30 00:54:41.133843 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133848 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133853 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.133857 | orchestrator | 2026-03-30 00:54:41.133865 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-30 00:54:41.133870 | orchestrator | Monday 30 March 2026 00:49:44 +0000 (0:00:00.858) 0:05:10.292 ********** 2026-03-30 00:54:41.133874 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133879 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133884 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133888 | orchestrator | 2026-03-30 00:54:41.133893 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-30 00:54:41.133897 | orchestrator | Monday 30 March 2026 00:49:45 +0000 (0:00:00.277) 0:05:10.569 ********** 2026-03-30 00:54:41.133902 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.133906 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.133911 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.133915 | orchestrator | 2026-03-30 00:54:41.133920 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-30 00:54:41.133924 | orchestrator | Monday 30 March 2026 00:49:45 +0000 (0:00:00.255) 0:05:10.825 ********** 2026-03-30 00:54:41.133929 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133933 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133940 | orchestrator | ok: [testbed-node-2] 2026-03-30 
00:54:41.133945 | orchestrator | 2026-03-30 00:54:41.133953 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-30 00:54:41.133960 | orchestrator | Monday 30 March 2026 00:49:46 +0000 (0:00:00.664) 0:05:11.490 ********** 2026-03-30 00:54:41.133968 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.133975 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.133991 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134004 | orchestrator | 2026-03-30 00:54:41.134010 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-30 00:54:41.134043 | orchestrator | Monday 30 March 2026 00:49:46 +0000 (0:00:00.841) 0:05:12.331 ********** 2026-03-30 00:54:41.134051 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134059 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134066 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134074 | orchestrator | 2026-03-30 00:54:41.134080 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-30 00:54:41.134085 | orchestrator | Monday 30 March 2026 00:49:47 +0000 (0:00:00.285) 0:05:12.616 ********** 2026-03-30 00:54:41.134089 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.134094 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.134098 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134103 | orchestrator | 2026-03-30 00:54:41.134107 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-30 00:54:41.134112 | orchestrator | Monday 30 March 2026 00:49:47 +0000 (0:00:00.282) 0:05:12.899 ********** 2026-03-30 00:54:41.134116 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134121 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134126 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134131 | orchestrator | 
2026-03-30 00:54:41.134136 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-30 00:54:41.134162 | orchestrator | Monday 30 March 2026 00:49:47 +0000 (0:00:00.274) 0:05:13.174 ********** 2026-03-30 00:54:41.134169 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134174 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134179 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134184 | orchestrator | 2026-03-30 00:54:41.134189 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-30 00:54:41.134194 | orchestrator | Monday 30 March 2026 00:49:48 +0000 (0:00:00.447) 0:05:13.621 ********** 2026-03-30 00:54:41.134207 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134213 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134223 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134228 | orchestrator | 2026-03-30 00:54:41.134233 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-30 00:54:41.134243 | orchestrator | Monday 30 March 2026 00:49:48 +0000 (0:00:00.251) 0:05:13.872 ********** 2026-03-30 00:54:41.134249 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134254 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134259 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134264 | orchestrator | 2026-03-30 00:54:41.134269 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-30 00:54:41.134274 | orchestrator | Monday 30 March 2026 00:49:48 +0000 (0:00:00.272) 0:05:14.145 ********** 2026-03-30 00:54:41.134279 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134284 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134289 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134294 | orchestrator | 
2026-03-30 00:54:41.134299 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-30 00:54:41.134304 | orchestrator | Monday 30 March 2026 00:49:49 +0000 (0:00:00.281) 0:05:14.426 ********** 2026-03-30 00:54:41.134309 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.134314 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.134319 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134324 | orchestrator | 2026-03-30 00:54:41.134329 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-30 00:54:41.134335 | orchestrator | Monday 30 March 2026 00:49:49 +0000 (0:00:00.293) 0:05:14.720 ********** 2026-03-30 00:54:41.134340 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.134345 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.134350 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134355 | orchestrator | 2026-03-30 00:54:41.134360 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-30 00:54:41.134365 | orchestrator | Monday 30 March 2026 00:49:49 +0000 (0:00:00.452) 0:05:15.172 ********** 2026-03-30 00:54:41.134370 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.134375 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.134380 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134385 | orchestrator | 2026-03-30 00:54:41.134390 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-30 00:54:41.134395 | orchestrator | Monday 30 March 2026 00:49:50 +0000 (0:00:00.523) 0:05:15.696 ********** 2026-03-30 00:54:41.134400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-30 00:54:41.134405 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:54:41.134411 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-30 00:54:41.134416 | orchestrator | 2026-03-30 00:54:41.134421 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-30 00:54:41.134426 | orchestrator | Monday 30 March 2026 00:49:51 +0000 (0:00:00.898) 0:05:16.594 ********** 2026-03-30 00:54:41.134431 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.134436 | orchestrator | 2026-03-30 00:54:41.134441 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-30 00:54:41.134446 | orchestrator | Monday 30 March 2026 00:49:51 +0000 (0:00:00.753) 0:05:17.347 ********** 2026-03-30 00:54:41.134451 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.134456 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.134461 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.134466 | orchestrator | 2026-03-30 00:54:41.134471 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-30 00:54:41.134476 | orchestrator | Monday 30 March 2026 00:49:52 +0000 (0:00:00.916) 0:05:18.264 ********** 2026-03-30 00:54:41.134484 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134490 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134495 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134500 | orchestrator | 2026-03-30 00:54:41.134505 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-30 00:54:41.134513 | orchestrator | Monday 30 March 2026 00:49:53 +0000 (0:00:00.316) 0:05:18.581 ********** 2026-03-30 00:54:41.134518 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-30 00:54:41.134524 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-30 00:54:41.134529 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-30 00:54:41.134534 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-30 00:54:41.134539 | orchestrator | 2026-03-30 00:54:41.134544 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-30 00:54:41.134549 | orchestrator | Monday 30 March 2026 00:50:01 +0000 (0:00:08.325) 0:05:26.907 ********** 2026-03-30 00:54:41.134554 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.134559 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.134564 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134569 | orchestrator | 2026-03-30 00:54:41.134575 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-30 00:54:41.134590 | orchestrator | Monday 30 March 2026 00:50:02 +0000 (0:00:00.638) 0:05:27.545 ********** 2026-03-30 00:54:41.134596 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-30 00:54:41.134601 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-30 00:54:41.134606 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-30 00:54:41.134611 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-30 00:54:41.134617 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.134638 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.134644 | orchestrator | 2026-03-30 00:54:41.134649 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-30 00:54:41.134654 | orchestrator | Monday 30 March 2026 00:50:04 +0000 (0:00:01.830) 0:05:29.376 ********** 2026-03-30 00:54:41.134659 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-30 00:54:41.134664 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-30 00:54:41.134669 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-30 
00:54:41.134674 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-30 00:54:41.134679 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-30 00:54:41.134684 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-30 00:54:41.134689 | orchestrator | 2026-03-30 00:54:41.134694 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-30 00:54:41.134700 | orchestrator | Monday 30 March 2026 00:50:05 +0000 (0:00:01.232) 0:05:30.608 ********** 2026-03-30 00:54:41.134705 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.134710 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.134715 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.134720 | orchestrator | 2026-03-30 00:54:41.134725 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-30 00:54:41.134730 | orchestrator | Monday 30 March 2026 00:50:06 +0000 (0:00:00.822) 0:05:31.431 ********** 2026-03-30 00:54:41.134735 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134740 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134745 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134750 | orchestrator | 2026-03-30 00:54:41.134755 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-30 00:54:41.134761 | orchestrator | Monday 30 March 2026 00:50:06 +0000 (0:00:00.656) 0:05:32.087 ********** 2026-03-30 00:54:41.134766 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134771 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134776 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134781 | orchestrator | 2026-03-30 00:54:41.134786 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-30 00:54:41.134791 | orchestrator | Monday 30 March 2026 00:50:07 +0000 (0:00:00.374) 0:05:32.461 
********** 2026-03-30 00:54:41.134796 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.134805 | orchestrator | 2026-03-30 00:54:41.134810 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-30 00:54:41.134815 | orchestrator | Monday 30 March 2026 00:50:07 +0000 (0:00:00.545) 0:05:33.007 ********** 2026-03-30 00:54:41.134820 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134825 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134830 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134835 | orchestrator | 2026-03-30 00:54:41.134840 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-30 00:54:41.134845 | orchestrator | Monday 30 March 2026 00:50:08 +0000 (0:00:00.389) 0:05:33.396 ********** 2026-03-30 00:54:41.134850 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.134855 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.134860 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.134865 | orchestrator | 2026-03-30 00:54:41.134870 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-30 00:54:41.134875 | orchestrator | Monday 30 March 2026 00:50:08 +0000 (0:00:00.679) 0:05:34.075 ********** 2026-03-30 00:54:41.134880 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.134886 | orchestrator | 2026-03-30 00:54:41.134891 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-30 00:54:41.134896 | orchestrator | Monday 30 March 2026 00:50:09 +0000 (0:00:00.537) 0:05:34.613 ********** 2026-03-30 00:54:41.134901 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.134907 | orchestrator | changed: 
[testbed-node-1] 2026-03-30 00:54:41.134916 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.134925 | orchestrator | 2026-03-30 00:54:41.134933 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-30 00:54:41.134945 | orchestrator | Monday 30 March 2026 00:50:10 +0000 (0:00:01.218) 0:05:35.831 ********** 2026-03-30 00:54:41.134953 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.134962 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.134971 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.134979 | orchestrator | 2026-03-30 00:54:41.134987 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-30 00:54:41.134996 | orchestrator | Monday 30 March 2026 00:50:11 +0000 (0:00:01.451) 0:05:37.283 ********** 2026-03-30 00:54:41.135005 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.135014 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.135020 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.135025 | orchestrator | 2026-03-30 00:54:41.135030 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-30 00:54:41.135036 | orchestrator | Monday 30 March 2026 00:50:13 +0000 (0:00:01.773) 0:05:39.056 ********** 2026-03-30 00:54:41.135041 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.135046 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.135051 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.135056 | orchestrator | 2026-03-30 00:54:41.135061 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-30 00:54:41.135066 | orchestrator | Monday 30 March 2026 00:50:15 +0000 (0:00:02.096) 0:05:41.152 ********** 2026-03-30 00:54:41.135071 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.135077 | orchestrator | skipping: 
[testbed-node-1] 2026-03-30 00:54:41.135082 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-30 00:54:41.135087 | orchestrator | 2026-03-30 00:54:41.135092 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-30 00:54:41.135097 | orchestrator | Monday 30 March 2026 00:50:16 +0000 (0:00:00.421) 0:05:41.574 ********** 2026-03-30 00:54:41.135120 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-30 00:54:41.135126 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-30 00:54:41.135136 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.135141 | orchestrator | 2026-03-30 00:54:41.135146 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-30 00:54:41.135151 | orchestrator | Monday 30 March 2026 00:50:29 +0000 (0:00:13.155) 0:05:54.729 ********** 2026-03-30 00:54:41.135156 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.135161 | orchestrator | 2026-03-30 00:54:41.135166 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-30 00:54:41.135171 | orchestrator | Monday 30 March 2026 00:50:30 +0000 (0:00:01.301) 0:05:56.031 ********** 2026-03-30 00:54:41.135176 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.135181 | orchestrator | 2026-03-30 00:54:41.135186 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-30 00:54:41.135192 | orchestrator | Monday 30 March 2026 00:50:30 +0000 (0:00:00.326) 0:05:56.358 ********** 2026-03-30 00:54:41.135197 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.135202 | orchestrator | 2026-03-30 00:54:41.135207 | orchestrator | TASK 
[ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-30 00:54:41.135212 | orchestrator | Monday 30 March 2026 00:50:31 +0000 (0:00:00.142) 0:05:56.500 ********** 2026-03-30 00:54:41.135217 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-30 00:54:41.135222 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-30 00:54:41.135227 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-30 00:54:41.135232 | orchestrator | 2026-03-30 00:54:41.135237 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-30 00:54:41.135242 | orchestrator | Monday 30 March 2026 00:50:37 +0000 (0:00:05.878) 0:06:02.379 ********** 2026-03-30 00:54:41.135247 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-30 00:54:41.135252 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-30 00:54:41.135257 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-30 00:54:41.135262 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-30 00:54:41.135268 | orchestrator | 2026-03-30 00:54:41.135273 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-30 00:54:41.135278 | orchestrator | Monday 30 March 2026 00:50:41 +0000 (0:00:04.493) 0:06:06.872 ********** 2026-03-30 00:54:41.135283 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.135288 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.135293 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.135298 | orchestrator | 2026-03-30 00:54:41.135303 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-30 00:54:41.135308 | orchestrator | Monday 30 March 2026 
00:50:42 +0000 (0:00:00.948) 0:06:07.821 ********** 2026-03-30 00:54:41.135313 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.135318 | orchestrator | 2026-03-30 00:54:41.135323 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-30 00:54:41.135328 | orchestrator | Monday 30 March 2026 00:50:42 +0000 (0:00:00.536) 0:06:08.357 ********** 2026-03-30 00:54:41.135333 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.135338 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.135343 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.135349 | orchestrator | 2026-03-30 00:54:41.135354 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-30 00:54:41.135359 | orchestrator | Monday 30 March 2026 00:50:43 +0000 (0:00:00.306) 0:06:08.663 ********** 2026-03-30 00:54:41.135364 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.135369 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.135374 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.135382 | orchestrator | 2026-03-30 00:54:41.135390 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-30 00:54:41.135396 | orchestrator | Monday 30 March 2026 00:50:44 +0000 (0:00:01.463) 0:06:10.127 ********** 2026-03-30 00:54:41.135401 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-30 00:54:41.135406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-30 00:54:41.135411 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-30 00:54:41.135416 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.135421 | orchestrator | 2026-03-30 00:54:41.135426 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] 
********* 2026-03-30 00:54:41.135431 | orchestrator | Monday 30 March 2026 00:50:45 +0000 (0:00:00.576) 0:06:10.704 ********** 2026-03-30 00:54:41.135437 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.135442 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.135447 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.135452 | orchestrator | 2026-03-30 00:54:41.135457 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-30 00:54:41.135462 | orchestrator | 2026-03-30 00:54:41.135467 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-30 00:54:41.135472 | orchestrator | Monday 30 March 2026 00:50:45 +0000 (0:00:00.498) 0:06:11.203 ********** 2026-03-30 00:54:41.135477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.135482 | orchestrator | 2026-03-30 00:54:41.135487 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-30 00:54:41.135492 | orchestrator | Monday 30 March 2026 00:50:46 +0000 (0:00:00.581) 0:06:11.785 ********** 2026-03-30 00:54:41.135513 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.135519 | orchestrator | 2026-03-30 00:54:41.135524 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-30 00:54:41.135529 | orchestrator | Monday 30 March 2026 00:50:46 +0000 (0:00:00.461) 0:06:12.246 ********** 2026-03-30 00:54:41.135535 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.135540 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.135545 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.135550 | orchestrator | 2026-03-30 00:54:41.135555 | orchestrator | TASK 
[ceph-handler : Check for an osd container] ******************************* 2026-03-30 00:54:41.135560 | orchestrator | Monday 30 March 2026 00:50:47 +0000 (0:00:00.269) 0:06:12.515 ********** 2026-03-30 00:54:41.135565 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135570 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135575 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135609 | orchestrator | 2026-03-30 00:54:41.135614 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-30 00:54:41.135619 | orchestrator | Monday 30 March 2026 00:50:48 +0000 (0:00:00.854) 0:06:13.370 ********** 2026-03-30 00:54:41.135624 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135630 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135635 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135640 | orchestrator | 2026-03-30 00:54:41.135645 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-30 00:54:41.135650 | orchestrator | Monday 30 March 2026 00:50:48 +0000 (0:00:00.675) 0:06:14.046 ********** 2026-03-30 00:54:41.135655 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135660 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135665 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135670 | orchestrator | 2026-03-30 00:54:41.135676 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-30 00:54:41.135681 | orchestrator | Monday 30 March 2026 00:50:49 +0000 (0:00:00.696) 0:06:14.742 ********** 2026-03-30 00:54:41.135686 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.135695 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.135700 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.135705 | orchestrator | 2026-03-30 00:54:41.135711 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-30 00:54:41.135716 | orchestrator | Monday 30 March 2026 00:50:49 +0000 (0:00:00.256) 0:06:14.998 ********** 2026-03-30 00:54:41.135721 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.135726 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.135731 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.135736 | orchestrator | 2026-03-30 00:54:41.135741 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-30 00:54:41.135746 | orchestrator | Monday 30 March 2026 00:50:50 +0000 (0:00:00.432) 0:06:15.431 ********** 2026-03-30 00:54:41.135751 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.135757 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.135762 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.135767 | orchestrator | 2026-03-30 00:54:41.135772 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-30 00:54:41.135777 | orchestrator | Monday 30 March 2026 00:50:50 +0000 (0:00:00.297) 0:06:15.728 ********** 2026-03-30 00:54:41.135782 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135787 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135792 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135797 | orchestrator | 2026-03-30 00:54:41.135803 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-30 00:54:41.135808 | orchestrator | Monday 30 March 2026 00:50:51 +0000 (0:00:00.685) 0:06:16.414 ********** 2026-03-30 00:54:41.135813 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135818 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135823 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135828 | orchestrator | 2026-03-30 00:54:41.135833 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-30 
00:54:41.135838 | orchestrator | Monday 30 March 2026 00:50:51 +0000 (0:00:00.734) 0:06:17.148 ********** 2026-03-30 00:54:41.135843 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.135849 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.135854 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.135859 | orchestrator | 2026-03-30 00:54:41.135864 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-30 00:54:41.135872 | orchestrator | Monday 30 March 2026 00:50:52 +0000 (0:00:00.426) 0:06:17.575 ********** 2026-03-30 00:54:41.135877 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.135882 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.135887 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.135892 | orchestrator | 2026-03-30 00:54:41.135898 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-30 00:54:41.135903 | orchestrator | Monday 30 March 2026 00:50:52 +0000 (0:00:00.266) 0:06:17.841 ********** 2026-03-30 00:54:41.135908 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135913 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135918 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135923 | orchestrator | 2026-03-30 00:54:41.135928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-30 00:54:41.135933 | orchestrator | Monday 30 March 2026 00:50:52 +0000 (0:00:00.303) 0:06:18.144 ********** 2026-03-30 00:54:41.135938 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135943 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135949 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135954 | orchestrator | 2026-03-30 00:54:41.135959 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-30 00:54:41.135964 | orchestrator | Monday 
30 March 2026 00:50:53 +0000 (0:00:00.286) 0:06:18.431 ********** 2026-03-30 00:54:41.135969 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.135974 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.135979 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.135987 | orchestrator | 2026-03-30 00:54:41.135992 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-30 00:54:41.135998 | orchestrator | Monday 30 March 2026 00:50:53 +0000 (0:00:00.460) 0:06:18.891 ********** 2026-03-30 00:54:41.136003 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136008 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136013 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136018 | orchestrator | 2026-03-30 00:54:41.136026 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-30 00:54:41.136031 | orchestrator | Monday 30 March 2026 00:50:53 +0000 (0:00:00.268) 0:06:19.160 ********** 2026-03-30 00:54:41.136037 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136042 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136047 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136052 | orchestrator | 2026-03-30 00:54:41.136057 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-30 00:54:41.136062 | orchestrator | Monday 30 March 2026 00:50:54 +0000 (0:00:00.266) 0:06:19.427 ********** 2026-03-30 00:54:41.136067 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136073 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136078 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136083 | orchestrator | 2026-03-30 00:54:41.136088 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-30 00:54:41.136093 | orchestrator | Monday 30 March 2026 
00:50:54 +0000 (0:00:00.282) 0:06:19.709 ********** 2026-03-30 00:54:41.136098 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.136103 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136108 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136113 | orchestrator | 2026-03-30 00:54:41.136118 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-30 00:54:41.136124 | orchestrator | Monday 30 March 2026 00:50:54 +0000 (0:00:00.440) 0:06:20.149 ********** 2026-03-30 00:54:41.136129 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.136134 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136139 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136144 | orchestrator | 2026-03-30 00:54:41.136149 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-30 00:54:41.136154 | orchestrator | Monday 30 March 2026 00:50:55 +0000 (0:00:00.455) 0:06:20.605 ********** 2026-03-30 00:54:41.136159 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.136164 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136169 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136174 | orchestrator | 2026-03-30 00:54:41.136179 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-30 00:54:41.136184 | orchestrator | Monday 30 March 2026 00:50:55 +0000 (0:00:00.281) 0:06:20.887 ********** 2026-03-30 00:54:41.136189 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:54:41.136194 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:54:41.136199 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:54:41.136205 | orchestrator | 2026-03-30 00:54:41.136210 | orchestrator | TASK [ceph-osd : Include_tasks 
system_tuning.yml] ****************************** 2026-03-30 00:54:41.136215 | orchestrator | Monday 30 March 2026 00:50:56 +0000 (0:00:00.702) 0:06:21.589 ********** 2026-03-30 00:54:41.136220 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.136225 | orchestrator | 2026-03-30 00:54:41.136230 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-30 00:54:41.136235 | orchestrator | Monday 30 March 2026 00:50:56 +0000 (0:00:00.599) 0:06:22.188 ********** 2026-03-30 00:54:41.136240 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136245 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136251 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136258 | orchestrator | 2026-03-30 00:54:41.136263 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-30 00:54:41.136269 | orchestrator | Monday 30 March 2026 00:50:57 +0000 (0:00:00.258) 0:06:22.447 ********** 2026-03-30 00:54:41.136274 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136279 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136284 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136289 | orchestrator | 2026-03-30 00:54:41.136294 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-30 00:54:41.136299 | orchestrator | Monday 30 March 2026 00:50:57 +0000 (0:00:00.249) 0:06:22.696 ********** 2026-03-30 00:54:41.136304 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.136309 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136314 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136319 | orchestrator | 2026-03-30 00:54:41.136330 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-30 00:54:41.136336 | 
orchestrator | Monday 30 March 2026 00:50:58 +0000 (0:00:00.832) 0:06:23.529 ********** 2026-03-30 00:54:41.136341 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.136346 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136351 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136356 | orchestrator | 2026-03-30 00:54:41.136361 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-30 00:54:41.136366 | orchestrator | Monday 30 March 2026 00:50:58 +0000 (0:00:00.339) 0:06:23.868 ********** 2026-03-30 00:54:41.136371 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-30 00:54:41.136376 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-30 00:54:41.136381 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-30 00:54:41.136387 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-30 00:54:41.136392 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-30 00:54:41.136397 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-30 00:54:41.136402 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-30 00:54:41.136407 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-30 00:54:41.136416 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-30 00:54:41.136421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-30 00:54:41.136426 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-30 
00:54:41.136431 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-30 00:54:41.136437 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-30 00:54:41.136442 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-30 00:54:41.136447 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-30 00:54:41.136452 | orchestrator | 2026-03-30 00:54:41.136457 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-30 00:54:41.136462 | orchestrator | Monday 30 March 2026 00:51:00 +0000 (0:00:02.266) 0:06:26.135 ********** 2026-03-30 00:54:41.136467 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136472 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136477 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136482 | orchestrator | 2026-03-30 00:54:41.136487 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-30 00:54:41.136492 | orchestrator | Monday 30 March 2026 00:51:01 +0000 (0:00:00.298) 0:06:26.433 ********** 2026-03-30 00:54:41.136501 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.136506 | orchestrator | 2026-03-30 00:54:41.136511 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-30 00:54:41.136516 | orchestrator | Monday 30 March 2026 00:51:01 +0000 (0:00:00.640) 0:06:27.074 ********** 2026-03-30 00:54:41.136521 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-30 00:54:41.136526 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-30 00:54:41.136531 | orchestrator | ok: [testbed-node-5] => 
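[editor's note] The "Apply operating system tuning" task above applies five kernel parameters via the sysctl module. A minimal sketch of the equivalent manual commands, using the exact values from the task items (requires root on an OSD node; whether each value is optimal for other hardware is not claimed here, these are simply the testbed's settings):

```shell
# Values copied from the "Apply operating system tuning" task items above.
sysctl -w fs.aio-max-nr=1048576      # raise the async-IO request limit used by BlueStore
sysctl -w fs.file-max=26234859       # raise the system-wide open-file limit
sysctl -w vm.zone_reclaim_mode=0     # avoid NUMA zone-reclaim stalls under memory pressure
sysctl -w vm.swappiness=10           # prefer dropping page cache over swapping daemons out
sysctl -w vm.min_free_kbytes=67584   # keep free-memory headroom for the kernel
```

To persist across reboots, the same `key = value` pairs would go into a file under `/etc/sysctl.d/`, which is effectively what the Ansible sysctl module does with `sysctl_file` set.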
(item=/var/lib/ceph/bootstrap-osd/) 2026-03-30 00:54:41.136536 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-30 00:54:41.136542 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-30 00:54:41.136547 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-30 00:54:41.136552 | orchestrator | 2026-03-30 00:54:41.136557 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-30 00:54:41.136562 | orchestrator | Monday 30 March 2026 00:51:02 +0000 (0:00:01.164) 0:06:28.238 ********** 2026-03-30 00:54:41.136567 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.136573 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-30 00:54:41.136586 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-30 00:54:41.136591 | orchestrator | 2026-03-30 00:54:41.136597 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-30 00:54:41.136602 | orchestrator | Monday 30 March 2026 00:51:04 +0000 (0:00:01.767) 0:06:30.006 ********** 2026-03-30 00:54:41.136607 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-30 00:54:41.136612 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-30 00:54:41.136617 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.136622 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-30 00:54:41.136627 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-30 00:54:41.136632 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.136637 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-30 00:54:41.136642 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-30 00:54:41.136648 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.136653 | orchestrator | 2026-03-30 00:54:41.136658 | 
orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-30 00:54:41.136663 | orchestrator | Monday 30 March 2026 00:51:06 +0000 (0:00:01.393) 0:06:31.400 ********** 2026-03-30 00:54:41.136671 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.136676 | orchestrator | 2026-03-30 00:54:41.136681 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-30 00:54:41.136686 | orchestrator | Monday 30 March 2026 00:51:07 +0000 (0:00:01.668) 0:06:33.068 ********** 2026-03-30 00:54:41.136691 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.136696 | orchestrator | 2026-03-30 00:54:41.136701 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-30 00:54:41.136707 | orchestrator | Monday 30 March 2026 00:51:08 +0000 (0:00:00.507) 0:06:33.575 ********** 2026-03-30 00:54:41.136712 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f', 'data_vg': 'ceph-6dc98b08-79a1-56b1-a9a0-4cf05631fa6f'}) 2026-03-30 00:54:41.136717 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17', 'data_vg': 'ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17'}) 2026-03-30 00:54:41.136723 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3e5d1498-d7a5-5a93-a004-d1785e71aab2', 'data_vg': 'ceph-3e5d1498-d7a5-5a93-a004-d1785e71aab2'}) 2026-03-30 00:54:41.136731 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-deb01b05-78a2-5c26-94fe-c042bb294237', 'data_vg': 'ceph-deb01b05-78a2-5c26-94fe-c042bb294237'}) 2026-03-30 00:54:41.136752 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5c90778-4ce0-5f2b-bfca-518c358a14f4', 'data_vg': 
'ceph-b5c90778-4ce0-5f2b-bfca-518c358a14f4'}) 2026-03-30 00:54:41.136764 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ae410091-a002-50e8-b50c-29c9b1a933c3', 'data_vg': 'ceph-ae410091-a002-50e8-b50c-29c9b1a933c3'}) 2026-03-30 00:54:41.136772 | orchestrator | 2026-03-30 00:54:41.136780 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-30 00:54:41.136788 | orchestrator | Monday 30 March 2026 00:51:44 +0000 (0:00:36.746) 0:07:10.322 ********** 2026-03-30 00:54:41.136796 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.136804 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.136811 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.136819 | orchestrator | 2026-03-30 00:54:41.136828 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-30 00:54:41.136836 | orchestrator | Monday 30 March 2026 00:51:45 +0000 (0:00:00.421) 0:07:10.743 ********** 2026-03-30 00:54:41.136845 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.136853 | orchestrator | 2026-03-30 00:54:41.136862 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-30 00:54:41.136870 | orchestrator | Monday 30 March 2026 00:51:45 +0000 (0:00:00.453) 0:07:11.196 ********** 2026-03-30 00:54:41.136878 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.136887 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136896 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136904 | orchestrator | 2026-03-30 00:54:41.136919 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-30 00:54:41.136928 | orchestrator | Monday 30 March 2026 00:51:46 +0000 (0:00:00.630) 0:07:11.827 ********** 2026-03-30 00:54:41.136935 | orchestrator | ok: 
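[editor's note] The "Use ceph-volume to create osds" task above creates one OSD per `data`/`data_vg` item. A hedged sketch of the equivalent single invocation, using one VG/LV pair copied from the log (the volume group and logical volume are pre-provisioned by the testbed; the explicit `--bluestore` flag is an assumption, it is the default objectstore in current releases):

```shell
# One OSD per logical volume; VG/LV names taken from the task item above.
ceph-volume lvm create --bluestore \
  --data ceph-8f4fd2da-a001-5de7-aa88-1349b3eb3c17/osd-block-8f4fd2da-a001-5de7-aa88-1349b3eb3c17
```

`ceph-volume lvm create` combines the prepare and activate phases; the ~37 s runtime shown for the task is dominated by these calls running once per device on each node.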
[testbed-node-3] 2026-03-30 00:54:41.136944 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.136952 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.136960 | orchestrator | 2026-03-30 00:54:41.136968 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-30 00:54:41.136977 | orchestrator | Monday 30 March 2026 00:51:49 +0000 (0:00:02.678) 0:07:14.506 ********** 2026-03-30 00:54:41.136986 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.136995 | orchestrator | 2026-03-30 00:54:41.137005 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-30 00:54:41.137011 | orchestrator | Monday 30 March 2026 00:51:49 +0000 (0:00:00.471) 0:07:14.978 ********** 2026-03-30 00:54:41.137016 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.137021 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.137026 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.137031 | orchestrator | 2026-03-30 00:54:41.137036 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-30 00:54:41.137042 | orchestrator | Monday 30 March 2026 00:51:50 +0000 (0:00:01.346) 0:07:16.324 ********** 2026-03-30 00:54:41.137047 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.137052 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.137057 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.137062 | orchestrator | 2026-03-30 00:54:41.137069 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-30 00:54:41.137081 | orchestrator | Monday 30 March 2026 00:51:52 +0000 (0:00:01.288) 0:07:17.613 ********** 2026-03-30 00:54:41.137090 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.137098 | orchestrator | changed: [testbed-node-4] 
2026-03-30 00:54:41.137106 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.137113 | orchestrator | 2026-03-30 00:54:41.137122 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-30 00:54:41.137138 | orchestrator | Monday 30 March 2026 00:51:54 +0000 (0:00:01.801) 0:07:19.415 ********** 2026-03-30 00:54:41.137146 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137155 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137164 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.137173 | orchestrator | 2026-03-30 00:54:41.137182 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-30 00:54:41.137190 | orchestrator | Monday 30 March 2026 00:51:54 +0000 (0:00:00.259) 0:07:19.674 ********** 2026-03-30 00:54:41.137198 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137212 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137221 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.137230 | orchestrator | 2026-03-30 00:54:41.137239 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-30 00:54:41.137248 | orchestrator | Monday 30 March 2026 00:51:54 +0000 (0:00:00.266) 0:07:19.941 ********** 2026-03-30 00:54:41.137257 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-30 00:54:41.137266 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-30 00:54:41.137274 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-30 00:54:41.137282 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-30 00:54:41.137288 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-30 00:54:41.137293 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-30 00:54:41.137298 | orchestrator | 2026-03-30 00:54:41.137304 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-30 00:54:41.137309 | 
orchestrator | Monday 30 March 2026 00:51:55 +0000 (0:00:01.068) 0:07:21.010 ********** 2026-03-30 00:54:41.137315 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-30 00:54:41.137320 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-30 00:54:41.137326 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-30 00:54:41.137331 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-30 00:54:41.137336 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-30 00:54:41.137341 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-30 00:54:41.137347 | orchestrator | 2026-03-30 00:54:41.137352 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-30 00:54:41.137358 | orchestrator | Monday 30 March 2026 00:51:57 +0000 (0:00:01.833) 0:07:22.844 ********** 2026-03-30 00:54:41.137363 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-30 00:54:41.137368 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-30 00:54:41.137381 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-30 00:54:41.137387 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-30 00:54:41.137392 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-30 00:54:41.137397 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-30 00:54:41.137403 | orchestrator | 2026-03-30 00:54:41.137408 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-30 00:54:41.137413 | orchestrator | Monday 30 March 2026 00:52:00 +0000 (0:00:03.163) 0:07:26.007 ********** 2026-03-30 00:54:41.137419 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137424 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.137435 | orchestrator | 2026-03-30 00:54:41.137441 | orchestrator | TASK [ceph-osd : Wait 
for all osd to be up] ************************************ 2026-03-30 00:54:41.137446 | orchestrator | Monday 30 March 2026 00:52:03 +0000 (0:00:02.641) 0:07:28.648 ********** 2026-03-30 00:54:41.137451 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137457 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137462 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-30 00:54:41.137468 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.137473 | orchestrator | 2026-03-30 00:54:41.137478 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-30 00:54:41.137489 | orchestrator | Monday 30 March 2026 00:52:16 +0000 (0:00:12.936) 0:07:41.585 ********** 2026-03-30 00:54:41.137494 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137500 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137505 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.137510 | orchestrator | 2026-03-30 00:54:41.137516 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-30 00:54:41.137521 | orchestrator | Monday 30 March 2026 00:52:17 +0000 (0:00:00.792) 0:07:42.378 ********** 2026-03-30 00:54:41.137527 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137537 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137545 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.137554 | orchestrator | 2026-03-30 00:54:41.137563 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-30 00:54:41.137573 | orchestrator | Monday 30 March 2026 00:52:17 +0000 (0:00:00.485) 0:07:42.864 ********** 2026-03-30 00:54:41.137599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 
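[editor's note] The log shows the usual `noup` bracket around OSD bring-up: the flag is set before activation ("Set noup flag"), unset afterwards ("Unset noup flag"), and then the play polls until every OSD is up ("Wait for all osd to be up", which retried once here). A sketch of the equivalent CLI sequence run against a mon; the count of 6 OSDs is inferred from the six item IDs (0-5) in the start task and is an assumption for this testbed:

```shell
ceph osd set noup        # keep freshly created OSDs from flapping into "up" mid-deploy
# ... create and start the OSD daemons ...
ceph osd unset noup
# Poll until all OSDs report up (the playbook allows up to 60 retries).
until ceph osd stat -f json | grep -q '"num_up_osds": 6'; do sleep 10; done
```

Parsing `ceph osd stat -f json` with `jq` and comparing `num_osds` to `num_up_osds` would be more robust than the fixed-count `grep` shown here.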
2026-03-30 00:54:41.137609 | orchestrator | 2026-03-30 00:54:41.137618 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-30 00:54:41.137627 | orchestrator | Monday 30 March 2026 00:52:17 +0000 (0:00:00.463) 0:07:43.327 ********** 2026-03-30 00:54:41.137636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:54:41.137646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:54:41.137655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:54:41.137663 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137672 | orchestrator | 2026-03-30 00:54:41.137681 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-30 00:54:41.137689 | orchestrator | Monday 30 March 2026 00:52:18 +0000 (0:00:00.372) 0:07:43.700 ********** 2026-03-30 00:54:41.137698 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137706 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137714 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.137722 | orchestrator | 2026-03-30 00:54:41.137730 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-30 00:54:41.137738 | orchestrator | Monday 30 March 2026 00:52:18 +0000 (0:00:00.308) 0:07:44.008 ********** 2026-03-30 00:54:41.137747 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137755 | orchestrator | 2026-03-30 00:54:41.137764 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-30 00:54:41.137773 | orchestrator | Monday 30 March 2026 00:52:18 +0000 (0:00:00.175) 0:07:44.183 ********** 2026-03-30 00:54:41.137781 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137790 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.137805 | orchestrator | skipping: 
[testbed-node-5] 2026-03-30 00:54:41.137814 | orchestrator | 2026-03-30 00:54:41.137822 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-30 00:54:41.137831 | orchestrator | Monday 30 March 2026 00:52:19 +0000 (0:00:00.444) 0:07:44.628 ********** 2026-03-30 00:54:41.137839 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137847 | orchestrator | 2026-03-30 00:54:41.137856 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-30 00:54:41.137864 | orchestrator | Monday 30 March 2026 00:52:19 +0000 (0:00:00.201) 0:07:44.830 ********** 2026-03-30 00:54:41.137873 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137881 | orchestrator | 2026-03-30 00:54:41.137890 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-30 00:54:41.137898 | orchestrator | Monday 30 March 2026 00:52:19 +0000 (0:00:00.200) 0:07:45.031 ********** 2026-03-30 00:54:41.137907 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137916 | orchestrator | 2026-03-30 00:54:41.137925 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-30 00:54:41.137943 | orchestrator | Monday 30 March 2026 00:52:19 +0000 (0:00:00.115) 0:07:45.146 ********** 2026-03-30 00:54:41.137952 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137962 | orchestrator | 2026-03-30 00:54:41.137971 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-30 00:54:41.137982 | orchestrator | Monday 30 March 2026 00:52:19 +0000 (0:00:00.188) 0:07:45.335 ********** 2026-03-30 00:54:41.137988 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.137993 | orchestrator | 2026-03-30 00:54:41.137998 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-30 00:54:41.138004 | 
orchestrator | Monday 30 March 2026 00:52:20 +0000 (0:00:00.203) 0:07:45.538 ********** 2026-03-30 00:54:41.138055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:54:41.138063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:54:41.138068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:54:41.138074 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138079 | orchestrator | 2026-03-30 00:54:41.138085 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-30 00:54:41.138090 | orchestrator | Monday 30 March 2026 00:52:20 +0000 (0:00:00.338) 0:07:45.876 ********** 2026-03-30 00:54:41.138095 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138101 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138106 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138112 | orchestrator | 2026-03-30 00:54:41.138117 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-30 00:54:41.138122 | orchestrator | Monday 30 March 2026 00:52:20 +0000 (0:00:00.269) 0:07:46.146 ********** 2026-03-30 00:54:41.138128 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138133 | orchestrator | 2026-03-30 00:54:41.138140 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-30 00:54:41.138149 | orchestrator | Monday 30 March 2026 00:52:21 +0000 (0:00:00.561) 0:07:46.707 ********** 2026-03-30 00:54:41.138158 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138166 | orchestrator | 2026-03-30 00:54:41.138175 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-30 00:54:41.138184 | orchestrator | 2026-03-30 00:54:41.138192 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-30 00:54:41.138201 | orchestrator | Monday 30 March 2026 00:52:21 +0000 (0:00:00.602) 0:07:47.310 ********** 2026-03-30 00:54:41.138211 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.138221 | orchestrator | 2026-03-30 00:54:41.138229 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-30 00:54:41.138239 | orchestrator | Monday 30 March 2026 00:52:22 +0000 (0:00:00.988) 0:07:48.298 ********** 2026-03-30 00:54:41.138247 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.138257 | orchestrator | 2026-03-30 00:54:41.138266 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-30 00:54:41.138275 | orchestrator | Monday 30 March 2026 00:52:23 +0000 (0:00:00.999) 0:07:49.297 ********** 2026-03-30 00:54:41.138284 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138294 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138302 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138307 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.138313 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.138318 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.138323 | orchestrator | 2026-03-30 00:54:41.138329 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-30 00:54:41.138334 | orchestrator | Monday 30 March 2026 00:52:25 +0000 (0:00:01.113) 0:07:50.411 ********** 2026-03-30 00:54:41.138345 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138351 | orchestrator | skipping: [testbed-node-1] 2026-03-30 
00:54:41.138356 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138361 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138367 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138372 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138377 | orchestrator | 2026-03-30 00:54:41.138383 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-30 00:54:41.138388 | orchestrator | Monday 30 March 2026 00:52:25 +0000 (0:00:00.716) 0:07:51.127 ********** 2026-03-30 00:54:41.138393 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138399 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138404 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138410 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138415 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138420 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138425 | orchestrator | 2026-03-30 00:54:41.138431 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-30 00:54:41.138440 | orchestrator | Monday 30 March 2026 00:52:26 +0000 (0:00:00.818) 0:07:51.945 ********** 2026-03-30 00:54:41.138445 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138451 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138456 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138462 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138467 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138472 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138477 | orchestrator | 2026-03-30 00:54:41.138483 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-30 00:54:41.138488 | orchestrator | Monday 30 March 2026 00:52:27 +0000 (0:00:00.797) 0:07:52.743 ********** 2026-03-30 00:54:41.138493 | orchestrator | skipping: [testbed-node-3] 
2026-03-30 00:54:41.138499 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138504 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138509 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.138515 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.138520 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.138525 | orchestrator | 2026-03-30 00:54:41.138530 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-30 00:54:41.138536 | orchestrator | Monday 30 March 2026 00:52:28 +0000 (0:00:00.906) 0:07:53.650 ********** 2026-03-30 00:54:41.138541 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138547 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138552 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138557 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138562 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138568 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138573 | orchestrator | 2026-03-30 00:54:41.138592 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-30 00:54:41.138598 | orchestrator | Monday 30 March 2026 00:52:28 +0000 (0:00:00.702) 0:07:54.352 ********** 2026-03-30 00:54:41.138603 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138614 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138619 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138625 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138630 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138635 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138641 | orchestrator | 2026-03-30 00:54:41.138646 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-30 00:54:41.138652 | orchestrator | Monday 30 March 2026 
00:52:29 +0000 (0:00:00.515) 0:07:54.867 ********** 2026-03-30 00:54:41.138657 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138662 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138668 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138673 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.138682 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.138687 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.138692 | orchestrator | 2026-03-30 00:54:41.138698 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-30 00:54:41.138703 | orchestrator | Monday 30 March 2026 00:52:30 +0000 (0:00:01.247) 0:07:56.115 ********** 2026-03-30 00:54:41.138709 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138714 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138719 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138725 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.138730 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.138735 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.138741 | orchestrator | 2026-03-30 00:54:41.138746 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-30 00:54:41.138752 | orchestrator | Monday 30 March 2026 00:52:31 +0000 (0:00:00.879) 0:07:56.994 ********** 2026-03-30 00:54:41.138757 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138763 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138768 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138773 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138779 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138784 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138789 | orchestrator | 2026-03-30 00:54:41.138795 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-30 00:54:41.138800 | orchestrator | Monday 30 March 2026 00:52:32 +0000 (0:00:00.714) 0:07:57.708 ********** 2026-03-30 00:54:41.138806 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.138811 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.138816 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.138822 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.138827 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.138832 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.138838 | orchestrator | 2026-03-30 00:54:41.138843 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-30 00:54:41.138848 | orchestrator | Monday 30 March 2026 00:52:33 +0000 (0:00:00.653) 0:07:58.362 ********** 2026-03-30 00:54:41.138854 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138859 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138864 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138870 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138875 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138880 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138886 | orchestrator | 2026-03-30 00:54:41.138891 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-30 00:54:41.138897 | orchestrator | Monday 30 March 2026 00:52:33 +0000 (0:00:00.803) 0:07:59.166 ********** 2026-03-30 00:54:41.138902 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138907 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138913 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138918 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138924 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138929 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138934 | orchestrator | 2026-03-30 00:54:41.138940 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-30 00:54:41.138945 | orchestrator | Monday 30 March 2026 00:52:34 +0000 (0:00:00.530) 0:07:59.696 ********** 2026-03-30 00:54:41.138951 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.138956 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.138961 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.138967 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.138972 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.138978 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.138983 | orchestrator | 2026-03-30 00:54:41.138991 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-30 00:54:41.138999 | orchestrator | Monday 30 March 2026 00:52:35 +0000 (0:00:00.672) 0:08:00.369 ********** 2026-03-30 00:54:41.139005 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.139010 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.139015 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.139021 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.139026 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.139031 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.139037 | orchestrator | 2026-03-30 00:54:41.139042 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-30 00:54:41.139048 | orchestrator | Monday 30 March 2026 00:52:35 +0000 (0:00:00.500) 0:08:00.870 ********** 2026-03-30 00:54:41.139053 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.139058 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.139064 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.139069 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:54:41.139074 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:54:41.139080 | 
orchestrator | skipping: [testbed-node-2] 2026-03-30 00:54:41.139085 | orchestrator | 2026-03-30 00:54:41.139090 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-30 00:54:41.139096 | orchestrator | Monday 30 March 2026 00:52:36 +0000 (0:00:00.659) 0:08:01.530 ********** 2026-03-30 00:54:41.139101 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.139107 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.139112 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.139117 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.139123 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.139128 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.139134 | orchestrator | 2026-03-30 00:54:41.139139 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-30 00:54:41.139148 | orchestrator | Monday 30 March 2026 00:52:36 +0000 (0:00:00.522) 0:08:02.052 ********** 2026-03-30 00:54:41.139153 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.139159 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.139164 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.139169 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.139175 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.139180 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.139186 | orchestrator | 2026-03-30 00:54:41.139191 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-30 00:54:41.139196 | orchestrator | Monday 30 March 2026 00:52:37 +0000 (0:00:00.758) 0:08:02.811 ********** 2026-03-30 00:54:41.139202 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.139211 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.139220 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.139229 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.139238 | orchestrator 
| ok: [testbed-node-1] 2026-03-30 00:54:41.139248 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.139257 | orchestrator | 2026-03-30 00:54:41.139265 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-30 00:54:41.139274 | orchestrator | Monday 30 March 2026 00:52:38 +0000 (0:00:01.175) 0:08:03.987 ********** 2026-03-30 00:54:41.139283 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.139291 | orchestrator | 2026-03-30 00:54:41.139300 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-30 00:54:41.139309 | orchestrator | Monday 30 March 2026 00:52:41 +0000 (0:00:03.036) 0:08:07.024 ********** 2026-03-30 00:54:41.139317 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.139325 | orchestrator | 2026-03-30 00:54:41.139333 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-30 00:54:41.139341 | orchestrator | Monday 30 March 2026 00:52:43 +0000 (0:00:01.781) 0:08:08.805 ********** 2026-03-30 00:54:41.139350 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.139358 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.139374 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.139382 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.139391 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.139400 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.139408 | orchestrator | 2026-03-30 00:54:41.139417 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-30 00:54:41.139426 | orchestrator | Monday 30 March 2026 00:52:44 +0000 (0:00:01.518) 0:08:10.324 ********** 2026-03-30 00:54:41.139435 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.139444 | orchestrator | changed: [testbed-node-4] 2026-03-30 
00:54:41.139454 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.139463 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.139472 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.139482 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.139488 | orchestrator | 2026-03-30 00:54:41.139494 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-30 00:54:41.139499 | orchestrator | Monday 30 March 2026 00:52:46 +0000 (0:00:01.077) 0:08:11.401 ********** 2026-03-30 00:54:41.139505 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.139511 | orchestrator | 2026-03-30 00:54:41.139517 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-30 00:54:41.139522 | orchestrator | Monday 30 March 2026 00:52:47 +0000 (0:00:01.014) 0:08:12.416 ********** 2026-03-30 00:54:41.139528 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.139533 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.139538 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.139544 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.139549 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.139554 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.139560 | orchestrator | 2026-03-30 00:54:41.139565 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-30 00:54:41.139570 | orchestrator | Monday 30 March 2026 00:52:48 +0000 (0:00:01.368) 0:08:13.785 ********** 2026-03-30 00:54:41.139576 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.139620 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.139626 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.139632 | orchestrator | 
changed: [testbed-node-0] 2026-03-30 00:54:41.139637 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.139649 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.139655 | orchestrator | 2026-03-30 00:54:41.139660 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-30 00:54:41.139666 | orchestrator | Monday 30 March 2026 00:52:51 +0000 (0:00:03.442) 0:08:17.228 ********** 2026-03-30 00:54:41.139671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:54:41.139677 | orchestrator | 2026-03-30 00:54:41.139682 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-30 00:54:41.139688 | orchestrator | Monday 30 March 2026 00:52:52 +0000 (0:00:01.040) 0:08:18.269 ********** 2026-03-30 00:54:41.139693 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.139698 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.139704 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.139709 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.139714 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.139720 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.139725 | orchestrator | 2026-03-30 00:54:41.139730 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-30 00:54:41.139736 | orchestrator | Monday 30 March 2026 00:52:53 +0000 (0:00:00.575) 0:08:18.844 ********** 2026-03-30 00:54:41.139741 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.139747 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.139757 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.139762 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:54:41.139768 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:54:41.139773 | 
orchestrator | changed: [testbed-node-2] 2026-03-30 00:54:41.139778 | orchestrator | 2026-03-30 00:54:41.139784 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-30 00:54:41.139798 | orchestrator | Monday 30 March 2026 00:52:56 +0000 (0:00:02.662) 0:08:21.507 ********** 2026-03-30 00:54:41.139807 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.139814 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.139821 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.139829 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:54:41.139837 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:54:41.139846 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:54:41.139853 | orchestrator | 2026-03-30 00:54:41.139861 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-30 00:54:41.139869 | orchestrator | 2026-03-30 00:54:41.139878 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-30 00:54:41.139887 | orchestrator | Monday 30 March 2026 00:52:56 +0000 (0:00:00.728) 0:08:22.235 ********** 2026-03-30 00:54:41.139895 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.139903 | orchestrator | 2026-03-30 00:54:41.139908 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-30 00:54:41.139913 | orchestrator | Monday 30 March 2026 00:52:57 +0000 (0:00:00.634) 0:08:22.870 ********** 2026-03-30 00:54:41.139918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.139922 | orchestrator | 2026-03-30 00:54:41.139927 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-30 00:54:41.139932 | 
orchestrator | Monday 30 March 2026 00:52:57 +0000 (0:00:00.438) 0:08:23.309 ********** 2026-03-30 00:54:41.139937 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.139941 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.139946 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.139951 | orchestrator | 2026-03-30 00:54:41.139956 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-30 00:54:41.139960 | orchestrator | Monday 30 March 2026 00:52:58 +0000 (0:00:00.422) 0:08:23.731 ********** 2026-03-30 00:54:41.139965 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.139970 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.139975 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.139979 | orchestrator | 2026-03-30 00:54:41.139987 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-30 00:54:41.139995 | orchestrator | Monday 30 March 2026 00:52:59 +0000 (0:00:00.675) 0:08:24.407 ********** 2026-03-30 00:54:41.140003 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140012 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140020 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140027 | orchestrator | 2026-03-30 00:54:41.140036 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-30 00:54:41.140044 | orchestrator | Monday 30 March 2026 00:52:59 +0000 (0:00:00.672) 0:08:25.080 ********** 2026-03-30 00:54:41.140052 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140060 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140068 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140076 | orchestrator | 2026-03-30 00:54:41.140085 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-30 00:54:41.140093 | orchestrator | Monday 30 March 2026 00:53:00 +0000 
(0:00:00.707) 0:08:25.787 ********** 2026-03-30 00:54:41.140101 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140106 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140111 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140115 | orchestrator | 2026-03-30 00:54:41.140125 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-30 00:54:41.140130 | orchestrator | Monday 30 March 2026 00:53:00 +0000 (0:00:00.420) 0:08:26.207 ********** 2026-03-30 00:54:41.140134 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140140 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140148 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140156 | orchestrator | 2026-03-30 00:54:41.140164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-30 00:54:41.140171 | orchestrator | Monday 30 March 2026 00:53:01 +0000 (0:00:00.264) 0:08:26.472 ********** 2026-03-30 00:54:41.140180 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140188 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140196 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140204 | orchestrator | 2026-03-30 00:54:41.140212 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-30 00:54:41.140224 | orchestrator | Monday 30 March 2026 00:53:01 +0000 (0:00:00.270) 0:08:26.743 ********** 2026-03-30 00:54:41.140232 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140239 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140244 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140249 | orchestrator | 2026-03-30 00:54:41.140253 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-30 00:54:41.140258 | orchestrator | Monday 30 March 2026 00:53:02 +0000 (0:00:00.660) 
0:08:27.403 ********** 2026-03-30 00:54:41.140263 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140268 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140273 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140277 | orchestrator | 2026-03-30 00:54:41.140282 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-30 00:54:41.140287 | orchestrator | Monday 30 March 2026 00:53:02 +0000 (0:00:00.884) 0:08:28.287 ********** 2026-03-30 00:54:41.140292 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140297 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140302 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140306 | orchestrator | 2026-03-30 00:54:41.140311 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-30 00:54:41.140316 | orchestrator | Monday 30 March 2026 00:53:03 +0000 (0:00:00.257) 0:08:28.545 ********** 2026-03-30 00:54:41.140321 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140326 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140330 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140335 | orchestrator | 2026-03-30 00:54:41.140340 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-30 00:54:41.140345 | orchestrator | Monday 30 March 2026 00:53:03 +0000 (0:00:00.273) 0:08:28.818 ********** 2026-03-30 00:54:41.140350 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140354 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140364 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140368 | orchestrator | 2026-03-30 00:54:41.140373 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-30 00:54:41.140378 | orchestrator | Monday 30 March 2026 00:53:03 +0000 (0:00:00.278) 0:08:29.096 ********** 2026-03-30 
00:54:41.140383 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140388 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140393 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140397 | orchestrator | 2026-03-30 00:54:41.140402 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-30 00:54:41.140407 | orchestrator | Monday 30 March 2026 00:53:04 +0000 (0:00:00.465) 0:08:29.562 ********** 2026-03-30 00:54:41.140412 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140417 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140421 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140426 | orchestrator | 2026-03-30 00:54:41.140431 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-30 00:54:41.140436 | orchestrator | Monday 30 March 2026 00:53:04 +0000 (0:00:00.290) 0:08:29.853 ********** 2026-03-30 00:54:41.140444 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140449 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140454 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140459 | orchestrator | 2026-03-30 00:54:41.140464 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-30 00:54:41.140468 | orchestrator | Monday 30 March 2026 00:53:04 +0000 (0:00:00.258) 0:08:30.111 ********** 2026-03-30 00:54:41.140473 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140478 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140483 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140488 | orchestrator | 2026-03-30 00:54:41.140495 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-30 00:54:41.140503 | orchestrator | Monday 30 March 2026 00:53:05 +0000 (0:00:00.276) 0:08:30.387 ********** 2026-03-30 00:54:41.140511 | orchestrator | 
skipping: [testbed-node-3] 2026-03-30 00:54:41.140519 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140528 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140536 | orchestrator | 2026-03-30 00:54:41.140545 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-30 00:54:41.140553 | orchestrator | Monday 30 March 2026 00:53:05 +0000 (0:00:00.436) 0:08:30.824 ********** 2026-03-30 00:54:41.140561 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140568 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140573 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140577 | orchestrator | 2026-03-30 00:54:41.140597 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-30 00:54:41.140602 | orchestrator | Monday 30 March 2026 00:53:05 +0000 (0:00:00.298) 0:08:31.122 ********** 2026-03-30 00:54:41.140606 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.140611 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.140616 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.140621 | orchestrator | 2026-03-30 00:54:41.140625 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-30 00:54:41.140632 | orchestrator | Monday 30 March 2026 00:53:06 +0000 (0:00:00.466) 0:08:31.589 ********** 2026-03-30 00:54:41.140640 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.140648 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.140656 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-30 00:54:41.140663 | orchestrator | 2026-03-30 00:54:41.140671 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-30 00:54:41.140678 | orchestrator | Monday 30 March 2026 00:53:06 +0000 (0:00:00.533) 0:08:32.122 ********** 2026-03-30 
00:54:41.140685 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.140692 | orchestrator | 2026-03-30 00:54:41.140699 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-30 00:54:41.140707 | orchestrator | Monday 30 March 2026 00:53:08 +0000 (0:00:01.667) 0:08:33.790 ********** 2026-03-30 00:54:41.140716 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-30 00:54:41.140729 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.140737 | orchestrator | 2026-03-30 00:54:41.140745 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-30 00:54:41.140752 | orchestrator | Monday 30 March 2026 00:53:08 +0000 (0:00:00.210) 0:08:34.000 ********** 2026-03-30 00:54:41.140761 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-30 00:54:41.140773 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-30 00:54:41.140787 | orchestrator | 2026-03-30 00:54:41.140795 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-30 00:54:41.140802 | orchestrator | Monday 30 March 2026 00:53:14 +0000 (0:00:05.783) 0:08:39.783 ********** 2026-03-30 00:54:41.140810 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-30 00:54:41.140817 | orchestrator | 2026-03-30 00:54:41.140826 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-30 00:54:41.140833 | orchestrator | Monday 30 March 2026 00:53:17 +0000 (0:00:02.676) 0:08:42.460 ********** 2026-03-30 00:54:41.140846 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.140854 | orchestrator | 2026-03-30 00:54:41.140861 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-30 00:54:41.140869 | orchestrator | Monday 30 March 2026 00:53:17 +0000 (0:00:00.613) 0:08:43.073 ********** 2026-03-30 00:54:41.140876 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-30 00:54:41.140883 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-30 00:54:41.140890 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-30 00:54:41.140898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-30 00:54:41.140905 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-30 00:54:41.140913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-30 00:54:41.140920 | orchestrator | 2026-03-30 00:54:41.140928 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-30 00:54:41.140935 | orchestrator | Monday 30 March 2026 00:53:18 +0000 (0:00:01.047) 0:08:44.121 ********** 2026-03-30 00:54:41.140942 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.140950 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-30 00:54:41.140958 | orchestrator | ok: [testbed-node-3 -> {{ 
groups.get(mon_group_name)[0] }}] 2026-03-30 00:54:41.140965 | orchestrator | 2026-03-30 00:54:41.140972 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-30 00:54:41.140980 | orchestrator | Monday 30 March 2026 00:53:20 +0000 (0:00:01.736) 0:08:45.858 ********** 2026-03-30 00:54:41.140988 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-30 00:54:41.140995 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-30 00:54:41.141003 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141011 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-30 00:54:41.141018 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-30 00:54:41.141025 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141032 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-30 00:54:41.141040 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-30 00:54:41.141047 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141055 | orchestrator | 2026-03-30 00:54:41.141062 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-30 00:54:41.141070 | orchestrator | Monday 30 March 2026 00:53:21 +0000 (0:00:01.305) 0:08:47.163 ********** 2026-03-30 00:54:41.141078 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141086 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141093 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141101 | orchestrator | 2026-03-30 00:54:41.141109 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-30 00:54:41.141116 | orchestrator | Monday 30 March 2026 00:53:23 +0000 (0:00:02.152) 0:08:49.316 ********** 2026-03-30 00:54:41.141123 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.141135 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.141142 | 
orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.141149 | orchestrator | 2026-03-30 00:54:41.141156 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-30 00:54:41.141163 | orchestrator | Monday 30 March 2026 00:53:24 +0000 (0:00:00.471) 0:08:49.787 ********** 2026-03-30 00:54:41.141170 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.141178 | orchestrator | 2026-03-30 00:54:41.141186 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-30 00:54:41.141194 | orchestrator | Monday 30 March 2026 00:53:24 +0000 (0:00:00.549) 0:08:50.337 ********** 2026-03-30 00:54:41.141202 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.141210 | orchestrator | 2026-03-30 00:54:41.141218 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-30 00:54:41.141231 | orchestrator | Monday 30 March 2026 00:53:25 +0000 (0:00:00.774) 0:08:51.112 ********** 2026-03-30 00:54:41.141240 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141248 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141255 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141263 | orchestrator | 2026-03-30 00:54:41.141272 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-30 00:54:41.141280 | orchestrator | Monday 30 March 2026 00:53:27 +0000 (0:00:01.319) 0:08:52.431 ********** 2026-03-30 00:54:41.141288 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141296 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141304 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141312 | orchestrator | 2026-03-30 00:54:41.141320 | orchestrator | TASK 
[ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-30 00:54:41.141328 | orchestrator | Monday 30 March 2026 00:53:28 +0000 (0:00:01.186) 0:08:53.618 ********** 2026-03-30 00:54:41.141336 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141344 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141352 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141360 | orchestrator | 2026-03-30 00:54:41.141368 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-30 00:54:41.141377 | orchestrator | Monday 30 March 2026 00:53:30 +0000 (0:00:01.828) 0:08:55.446 ********** 2026-03-30 00:54:41.141383 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141388 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141393 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141398 | orchestrator | 2026-03-30 00:54:41.141403 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-30 00:54:41.141407 | orchestrator | Monday 30 March 2026 00:53:32 +0000 (0:00:02.409) 0:08:57.855 ********** 2026-03-30 00:54:41.141412 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141417 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141422 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141427 | orchestrator | 2026-03-30 00:54:41.141438 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-30 00:54:41.141443 | orchestrator | Monday 30 March 2026 00:53:33 +0000 (0:00:01.327) 0:08:59.182 ********** 2026-03-30 00:54:41.141447 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141452 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141457 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141462 | orchestrator | 2026-03-30 00:54:41.141467 | orchestrator | RUNNING HANDLER [ceph-handler : 
Mdss handler] ********************************** 2026-03-30 00:54:41.141471 | orchestrator | Monday 30 March 2026 00:53:35 +0000 (0:00:01.204) 0:09:00.387 ********** 2026-03-30 00:54:41.141476 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.141481 | orchestrator | 2026-03-30 00:54:41.141491 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-30 00:54:41.141496 | orchestrator | Monday 30 March 2026 00:53:35 +0000 (0:00:00.586) 0:09:00.973 ********** 2026-03-30 00:54:41.141501 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141505 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141510 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141515 | orchestrator | 2026-03-30 00:54:41.141520 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-30 00:54:41.141524 | orchestrator | Monday 30 March 2026 00:53:35 +0000 (0:00:00.292) 0:09:01.266 ********** 2026-03-30 00:54:41.141529 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.141534 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.141539 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.141544 | orchestrator | 2026-03-30 00:54:41.141548 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-30 00:54:41.141553 | orchestrator | Monday 30 March 2026 00:53:37 +0000 (0:00:01.863) 0:09:03.130 ********** 2026-03-30 00:54:41.141558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:54:41.141563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:54:41.141568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:54:41.141573 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.141578 | orchestrator | 
2026-03-30 00:54:41.141617 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-30 00:54:41.141622 | orchestrator | Monday 30 March 2026 00:53:38 +0000 (0:00:00.566) 0:09:03.697 ********** 2026-03-30 00:54:41.141627 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141631 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141636 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141641 | orchestrator | 2026-03-30 00:54:41.141646 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-30 00:54:41.141650 | orchestrator | 2026-03-30 00:54:41.141655 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-30 00:54:41.141660 | orchestrator | Monday 30 March 2026 00:53:39 +0000 (0:00:00.739) 0:09:04.437 ********** 2026-03-30 00:54:41.141665 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.141670 | orchestrator | 2026-03-30 00:54:41.141675 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-30 00:54:41.141680 | orchestrator | Monday 30 March 2026 00:53:39 +0000 (0:00:00.728) 0:09:05.165 ********** 2026-03-30 00:54:41.141684 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.141689 | orchestrator | 2026-03-30 00:54:41.141694 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-30 00:54:41.141699 | orchestrator | Monday 30 March 2026 00:53:40 +0000 (0:00:00.482) 0:09:05.648 ********** 2026-03-30 00:54:41.141703 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.141708 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.141713 | orchestrator | skipping: 
[testbed-node-5] 2026-03-30 00:54:41.141718 | orchestrator | 2026-03-30 00:54:41.141722 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-30 00:54:41.141727 | orchestrator | Monday 30 March 2026 00:53:40 +0000 (0:00:00.386) 0:09:06.035 ********** 2026-03-30 00:54:41.141735 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141740 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141745 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141750 | orchestrator | 2026-03-30 00:54:41.141754 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-30 00:54:41.141759 | orchestrator | Monday 30 March 2026 00:53:41 +0000 (0:00:00.636) 0:09:06.671 ********** 2026-03-30 00:54:41.141764 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141769 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141777 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141782 | orchestrator | 2026-03-30 00:54:41.141787 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-30 00:54:41.141792 | orchestrator | Monday 30 March 2026 00:53:42 +0000 (0:00:00.693) 0:09:07.365 ********** 2026-03-30 00:54:41.141796 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141801 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141806 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141810 | orchestrator | 2026-03-30 00:54:41.141815 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-30 00:54:41.141820 | orchestrator | Monday 30 March 2026 00:53:42 +0000 (0:00:00.638) 0:09:08.004 ********** 2026-03-30 00:54:41.141825 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.141829 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.141834 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.141839 | 
orchestrator | 2026-03-30 00:54:41.141844 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-30 00:54:41.141848 | orchestrator | Monday 30 March 2026 00:53:43 +0000 (0:00:00.851) 0:09:08.856 ********** 2026-03-30 00:54:41.141853 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.141858 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.141866 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.141874 | orchestrator | 2026-03-30 00:54:41.141882 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-30 00:54:41.141894 | orchestrator | Monday 30 March 2026 00:53:44 +0000 (0:00:00.526) 0:09:09.382 ********** 2026-03-30 00:54:41.141903 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.141911 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.141919 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.141927 | orchestrator | 2026-03-30 00:54:41.141935 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-30 00:54:41.141944 | orchestrator | Monday 30 March 2026 00:53:44 +0000 (0:00:00.283) 0:09:09.665 ********** 2026-03-30 00:54:41.141952 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141961 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141966 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141971 | orchestrator | 2026-03-30 00:54:41.141976 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-30 00:54:41.141980 | orchestrator | Monday 30 March 2026 00:53:44 +0000 (0:00:00.616) 0:09:10.282 ********** 2026-03-30 00:54:41.141985 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.141990 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.141995 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.141999 | orchestrator | 2026-03-30 
00:54:41.142004 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-30 00:54:41.142009 | orchestrator | Monday 30 March 2026 00:53:45 +0000 (0:00:00.725) 0:09:11.007 ********** 2026-03-30 00:54:41.142037 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142043 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142047 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142052 | orchestrator | 2026-03-30 00:54:41.142057 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-30 00:54:41.142062 | orchestrator | Monday 30 March 2026 00:53:45 +0000 (0:00:00.258) 0:09:11.266 ********** 2026-03-30 00:54:41.142066 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142071 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142076 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142081 | orchestrator | 2026-03-30 00:54:41.142087 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-30 00:54:41.142095 | orchestrator | Monday 30 March 2026 00:53:46 +0000 (0:00:00.244) 0:09:11.510 ********** 2026-03-30 00:54:41.142103 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.142112 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.142120 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.142129 | orchestrator | 2026-03-30 00:54:41.142136 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-30 00:54:41.142148 | orchestrator | Monday 30 March 2026 00:53:46 +0000 (0:00:00.358) 0:09:11.869 ********** 2026-03-30 00:54:41.142154 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.142158 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.142163 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.142168 | orchestrator | 2026-03-30 00:54:41.142172 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-30 00:54:41.142177 | orchestrator | Monday 30 March 2026 00:53:46 +0000 (0:00:00.443) 0:09:12.312 ********** 2026-03-30 00:54:41.142182 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.142187 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.142191 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.142196 | orchestrator | 2026-03-30 00:54:41.142201 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-30 00:54:41.142205 | orchestrator | Monday 30 March 2026 00:53:47 +0000 (0:00:00.277) 0:09:12.590 ********** 2026-03-30 00:54:41.142210 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142214 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142219 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142223 | orchestrator | 2026-03-30 00:54:41.142228 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-30 00:54:41.142232 | orchestrator | Monday 30 March 2026 00:53:47 +0000 (0:00:00.270) 0:09:12.860 ********** 2026-03-30 00:54:41.142237 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142241 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142246 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142250 | orchestrator | 2026-03-30 00:54:41.142255 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-30 00:54:41.142259 | orchestrator | Monday 30 March 2026 00:53:47 +0000 (0:00:00.263) 0:09:13.123 ********** 2026-03-30 00:54:41.142264 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142268 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142276 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142280 | orchestrator | 2026-03-30 00:54:41.142285 | orchestrator | TASK [ceph-handler : 
Set_fact handler_crash_status] **************************** 2026-03-30 00:54:41.142289 | orchestrator | Monday 30 March 2026 00:53:48 +0000 (0:00:00.568) 0:09:13.692 ********** 2026-03-30 00:54:41.142294 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.142298 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.142303 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.142307 | orchestrator | 2026-03-30 00:54:41.142312 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-30 00:54:41.142317 | orchestrator | Monday 30 March 2026 00:53:48 +0000 (0:00:00.341) 0:09:14.033 ********** 2026-03-30 00:54:41.142325 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.142332 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.142340 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.142347 | orchestrator | 2026-03-30 00:54:41.142354 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-30 00:54:41.142362 | orchestrator | Monday 30 March 2026 00:53:49 +0000 (0:00:00.514) 0:09:14.547 ********** 2026-03-30 00:54:41.142369 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.142376 | orchestrator | 2026-03-30 00:54:41.142384 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-30 00:54:41.142391 | orchestrator | Monday 30 March 2026 00:53:49 +0000 (0:00:00.787) 0:09:15.335 ********** 2026-03-30 00:54:41.142399 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142407 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-30 00:54:41.142415 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-30 00:54:41.142420 | orchestrator | 2026-03-30 00:54:41.142429 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if 
needed] *********************************** 2026-03-30 00:54:41.142442 | orchestrator | Monday 30 March 2026 00:53:51 +0000 (0:00:01.769) 0:09:17.105 ********** 2026-03-30 00:54:41.142450 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-30 00:54:41.142457 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-30 00:54:41.142465 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.142473 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-30 00:54:41.142481 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-30 00:54:41.142489 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.142497 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-30 00:54:41.142505 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-30 00:54:41.142510 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.142515 | orchestrator | 2026-03-30 00:54:41.142519 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-30 00:54:41.142524 | orchestrator | Monday 30 March 2026 00:53:52 +0000 (0:00:01.236) 0:09:18.342 ********** 2026-03-30 00:54:41.142529 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142533 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142538 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142542 | orchestrator | 2026-03-30 00:54:41.142547 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-30 00:54:41.142551 | orchestrator | Monday 30 March 2026 00:53:53 +0000 (0:00:00.323) 0:09:18.666 ********** 2026-03-30 00:54:41.142556 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.142561 | orchestrator | 2026-03-30 00:54:41.142565 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 
2026-03-30 00:54:41.142570 | orchestrator | Monday 30 March 2026 00:53:54 +0000 (0:00:00.795) 0:09:19.461 ********** 2026-03-30 00:54:41.142575 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-30 00:54:41.142591 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-30 00:54:41.142597 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-30 00:54:41.142602 | orchestrator | 2026-03-30 00:54:41.142606 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-30 00:54:41.142611 | orchestrator | Monday 30 March 2026 00:53:54 +0000 (0:00:00.840) 0:09:20.301 ********** 2026-03-30 00:54:41.142615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142620 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-30 00:54:41.142625 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142629 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-30 00:54:41.142634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142638 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-30 00:54:41.142643 | orchestrator | 2026-03-30 00:54:41.142647 | orchestrator | TASK [ceph-rgw : Get keys 
from monitors] *************************************** 2026-03-30 00:54:41.142652 | orchestrator | Monday 30 March 2026 00:53:58 +0000 (0:00:03.378) 0:09:23.680 ********** 2026-03-30 00:54:41.142656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142666 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-30 00:54:41.142671 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142679 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-30 00:54:41.142684 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:54:41.142688 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-30 00:54:41.142693 | orchestrator | 2026-03-30 00:54:41.142697 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-30 00:54:41.142702 | orchestrator | Monday 30 March 2026 00:54:00 +0000 (0:00:02.422) 0:09:26.103 ********** 2026-03-30 00:54:41.142706 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-30 00:54:41.142711 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.142715 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-30 00:54:41.142720 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.142724 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-30 00:54:41.142729 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.142734 | orchestrator | 2026-03-30 00:54:41.142738 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-30 00:54:41.142743 | orchestrator | Monday 30 March 2026 00:54:01 +0000 (0:00:01.231) 0:09:27.334 ********** 2026-03-30 00:54:41.142747 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-30 
00:54:41.142752 | orchestrator | 2026-03-30 00:54:41.142756 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-30 00:54:41.142761 | orchestrator | Monday 30 March 2026 00:54:02 +0000 (0:00:00.223) 0:09:27.558 ********** 2026-03-30 00:54:41.142769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142793 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142797 | orchestrator | 2026-03-30 00:54:41.142802 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-30 00:54:41.142806 | orchestrator | Monday 30 March 2026 00:54:02 +0000 (0:00:00.594) 0:09:28.153 ********** 2026-03-30 00:54:41.142811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-30 00:54:41.142825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-30 00:54:41.142834 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142838 | orchestrator | 2026-03-30 00:54:41.142843 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-30 00:54:41.142847 | orchestrator | Monday 30 March 2026 00:54:03 +0000 (0:00:00.561) 0:09:28.714 ********** 2026-03-30 00:54:41.142852 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-30 00:54:41.142859 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-30 00:54:41.142864 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-30 00:54:41.142869 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-30 00:54:41.142873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-30 00:54:41.142878 | orchestrator | 2026-03-30 00:54:41.142882 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-30 00:54:41.142887 | orchestrator | Monday 30 March 2026 00:54:25 +0000 (0:00:22.051) 0:09:50.766 
********** 2026-03-30 00:54:41.142891 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142896 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142900 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142905 | orchestrator | 2026-03-30 00:54:41.142911 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-30 00:54:41.142916 | orchestrator | Monday 30 March 2026 00:54:25 +0000 (0:00:00.315) 0:09:51.082 ********** 2026-03-30 00:54:41.142921 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.142925 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.142930 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.142934 | orchestrator | 2026-03-30 00:54:41.142939 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-30 00:54:41.142943 | orchestrator | Monday 30 March 2026 00:54:26 +0000 (0:00:00.563) 0:09:51.646 ********** 2026-03-30 00:54:41.142948 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.142952 | orchestrator | 2026-03-30 00:54:41.142957 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-30 00:54:41.142961 | orchestrator | Monday 30 March 2026 00:54:26 +0000 (0:00:00.516) 0:09:52.163 ********** 2026-03-30 00:54:41.142966 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.142970 | orchestrator | 2026-03-30 00:54:41.142975 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-30 00:54:41.142979 | orchestrator | Monday 30 March 2026 00:54:27 +0000 (0:00:00.731) 0:09:52.894 ********** 2026-03-30 00:54:41.142984 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.142989 | orchestrator | 
changed: [testbed-node-4] 2026-03-30 00:54:41.142993 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.142998 | orchestrator | 2026-03-30 00:54:41.143002 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-30 00:54:41.143010 | orchestrator | Monday 30 March 2026 00:54:28 +0000 (0:00:01.424) 0:09:54.318 ********** 2026-03-30 00:54:41.143015 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.143019 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.143024 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.143028 | orchestrator | 2026-03-30 00:54:41.143033 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-30 00:54:41.143038 | orchestrator | Monday 30 March 2026 00:54:30 +0000 (0:00:01.273) 0:09:55.592 ********** 2026-03-30 00:54:41.143042 | orchestrator | changed: [testbed-node-3] 2026-03-30 00:54:41.143047 | orchestrator | changed: [testbed-node-5] 2026-03-30 00:54:41.143051 | orchestrator | changed: [testbed-node-4] 2026-03-30 00:54:41.143056 | orchestrator | 2026-03-30 00:54:41.143060 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-30 00:54:41.143065 | orchestrator | Monday 30 March 2026 00:54:32 +0000 (0:00:02.073) 0:09:57.665 ********** 2026-03-30 00:54:41.143072 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-30 00:54:41.143077 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-30 00:54:41.143081 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-30 00:54:41.143086 | orchestrator | 2026-03-30 00:54:41.143090 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-30 00:54:41.143095 | orchestrator | Monday 30 March 2026 00:54:35 +0000 (0:00:02.758) 0:10:00.424 ********** 2026-03-30 00:54:41.143099 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.143104 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.143109 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.143113 | orchestrator | 2026-03-30 00:54:41.143118 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-30 00:54:41.143122 | orchestrator | Monday 30 March 2026 00:54:35 +0000 (0:00:00.328) 0:10:00.752 ********** 2026-03-30 00:54:41.143127 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:54:41.143131 | orchestrator | 2026-03-30 00:54:41.143136 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-30 00:54:41.143141 | orchestrator | Monday 30 March 2026 00:54:36 +0000 (0:00:00.756) 0:10:01.509 ********** 2026-03-30 00:54:41.143145 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.143150 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.143154 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.143159 | orchestrator | 2026-03-30 00:54:41.143163 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-30 00:54:41.143168 | orchestrator | Monday 30 March 2026 00:54:36 +0000 (0:00:00.350) 0:10:01.860 ********** 2026-03-30 00:54:41.143172 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.143177 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:54:41.143181 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:54:41.143186 | orchestrator | 2026-03-30 00:54:41.143190 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-30 
00:54:41.143195 | orchestrator | Monday 30 March 2026 00:54:36 +0000 (0:00:00.328) 0:10:02.188 ********** 2026-03-30 00:54:41.143199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:54:41.143204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:54:41.143209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:54:41.143213 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:54:41.143218 | orchestrator | 2026-03-30 00:54:41.143222 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-30 00:54:41.143227 | orchestrator | Monday 30 March 2026 00:54:37 +0000 (0:00:01.066) 0:10:03.254 ********** 2026-03-30 00:54:41.143231 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:54:41.143236 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:54:41.143241 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:54:41.143245 | orchestrator | 2026-03-30 00:54:41.143250 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:54:41.143256 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-30 00:54:41.143261 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-30 00:54:41.143266 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-30 00:54:41.143271 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-30 00:54:41.143278 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-30 00:54:41.143282 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-30 00:54:41.143287 | orchestrator | 2026-03-30 
00:54:41.143292 | orchestrator | 2026-03-30 00:54:41.143296 | orchestrator | 2026-03-30 00:54:41.143301 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:54:41.143305 | orchestrator | Monday 30 March 2026 00:54:38 +0000 (0:00:00.254) 0:10:03.509 ********** 2026-03-30 00:54:41.143310 | orchestrator | =============================================================================== 2026-03-30 00:54:41.143317 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.70s 2026-03-30 00:54:41.143322 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 36.75s 2026-03-30 00:54:41.143326 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 22.05s 2026-03-30 00:54:41.143331 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.48s 2026-03-30 00:54:41.143336 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.16s 2026-03-30 00:54:41.143340 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.94s 2026-03-30 00:54:41.143345 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 8.68s 2026-03-30 00:54:41.143349 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.33s 2026-03-30 00:54:41.143354 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.13s 2026-03-30 00:54:41.143358 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.19s 2026-03-30 00:54:41.143363 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 5.88s 2026-03-30 00:54:41.143367 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 5.78s 2026-03-30 00:54:41.143372 | orchestrator | ceph-mgr : Add modules 
to ceph-mgr -------------------------------------- 4.49s 2026-03-30 00:54:41.143376 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.26s 2026-03-30 00:54:41.143381 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.71s 2026-03-30 00:54:41.143385 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.44s 2026-03-30 00:54:41.143390 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.38s 2026-03-30 00:54:41.143394 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.16s 2026-03-30 00:54:41.143399 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.04s 2026-03-30 00:54:41.143403 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 2.90s 2026-03-30 00:54:41.143408 | orchestrator | 2026-03-30 00:54:41 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:54:41.143412 | orchestrator | 2026-03-30 00:54:41 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:54:41.143417 | orchestrator | 2026-03-30 00:54:41 | INFO  | Task 5f4192a1-a6d7-4f9d-9097-399600dbaf88 is in state STARTED 2026-03-30 00:54:41.143422 | orchestrator | 2026-03-30 00:54:41 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:55:48.244602 | orchestrator | 2026-03-30 00:55:48 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:55:48.247193 | orchestrator | 2026-03-30 00:55:48 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:55:48.249691 | orchestrator | 2026-03-30 00:55:48 | INFO  | Task 5f4192a1-a6d7-4f9d-9097-399600dbaf88 is in state SUCCESS 2026-03-30 00:55:48.251203 | orchestrator | 2026-03-30 00:55:48.251240 | orchestrator | 2026-03-30 00:55:48.251249 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 00:55:48.251256 | orchestrator | 2026-03-30 00:55:48.251281 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:55:48.251292 | orchestrator | Monday 30 March 2026 00:53:08 +0000 (0:00:00.271) 0:00:00.271 ********** 2026-03-30 00:55:48.251303 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:55:48.251314 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:55:48.251323 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:55:48.251333 | orchestrator | 2026-03-30 00:55:48.251344 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:55:48.251355 | orchestrator | Monday 30 March 2026 00:53:08 +0000 (0:00:00.275) 0:00:00.547 ********** 2026-03-30 00:55:48.251366 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-30 00:55:48.251376 | orchestrator | ok: [testbed-node-1] =>
(item=enable_opensearch_True) 2026-03-30 00:55:48.251386 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-30 00:55:48.251463 | orchestrator | 2026-03-30 00:55:48.251571 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-30 00:55:48.251582 | orchestrator | 2026-03-30 00:55:48.251588 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-30 00:55:48.251595 | orchestrator | Monday 30 March 2026 00:53:09 +0000 (0:00:00.267) 0:00:00.815 ********** 2026-03-30 00:55:48.251601 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:55:48.251608 | orchestrator | 2026-03-30 00:55:48.251614 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-30 00:55:48.251620 | orchestrator | Monday 30 March 2026 00:53:09 +0000 (0:00:00.517) 0:00:01.332 ********** 2026-03-30 00:55:48.251626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-30 00:55:48.251632 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-30 00:55:48.251639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-30 00:55:48.251645 | orchestrator | 2026-03-30 00:55:48.251654 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-30 00:55:48.251660 | orchestrator | Monday 30 March 2026 00:53:11 +0000 (0:00:01.969) 0:00:03.302 ********** 2026-03-30 00:55:48.251668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.251691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.251707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.251723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.251730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.251741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.251748 | orchestrator | 2026-03-30 00:55:48.251759 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-30 00:55:48.251770 | orchestrator | Monday 30 March 2026 00:53:12 +0000 (0:00:01.300) 0:00:04.603 ********** 2026-03-30 00:55:48.251782 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-30 00:55:48.251797 | orchestrator | 2026-03-30 00:55:48.251807 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-30 00:55:48.251817 | orchestrator | Monday 30 March 2026 00:53:13 +0000 (0:00:00.444) 0:00:05.047 ********** 2026-03-30 00:55:48.251836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.251849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-03-30 00:55:48.251860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.251870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 
00:55:48.251892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.251904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.251919 | orchestrator | 2026-03-30 00:55:48.251931 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-30 00:55:48.251941 | orchestrator | Monday 30 March 2026 00:53:16 +0000 (0:00:02.870) 0:00:07.918 ********** 2026-03-30 00:55:48.251952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:55:48.251969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:55:48.251986 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:55:48.251994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:55:48.252007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:55:48.252014 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:55:48.252020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:55:48.252027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:55:48.252037 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:55:48.252044 | orchestrator | 2026-03-30 00:55:48.252050 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-30 00:55:48.252056 | orchestrator | Monday 30 March 2026 00:53:16 +0000 (0:00:00.491) 0:00:08.409 ********** 2026-03-30 00:55:48.252065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:55:48.252077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:55:48.252084 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:55:48.252091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:55:48.252098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:55:48.252108 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:55:48.252117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-30 00:55:48.252130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-30 00:55:48.252136 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:55:48.252143 | orchestrator | 2026-03-30 00:55:48.252149 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-30 00:55:48.252155 | orchestrator | Monday 30 March 2026 00:53:17 +0000 (0:00:00.763) 0:00:09.173 ********** 2026-03-30 00:55:48.252162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.252169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.252182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.252194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.252202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-30 00:55:48.252209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.252220 | orchestrator | 2026-03-30 00:55:48.252226 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-30 00:55:48.252233 | orchestrator | Monday 30 March 2026 00:53:19 +0000 (0:00:02.377) 0:00:11.551 ********** 2026-03-30 00:55:48.252239 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:55:48.252245 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:55:48.252251 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:55:48.252258 | orchestrator | 2026-03-30 00:55:48.252264 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-30 00:55:48.252270 | orchestrator | Monday 30 March 2026 00:53:22 +0000 (0:00:02.181) 0:00:13.732 ********** 2026-03-30 00:55:48.252276 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:55:48.252284 | orchestrator 
| changed: [testbed-node-1] 2026-03-30 00:55:48.252294 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:55:48.252309 | orchestrator | 2026-03-30 00:55:48.252320 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-30 00:55:48.252330 | orchestrator | Monday 30 March 2026 00:53:23 +0000 (0:00:01.783) 0:00:15.516 ********** 2026-03-30 00:55:48.252348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.252367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.252378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-30 00:55:48.252397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.252407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.252419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-30 00:55:48.252426 | orchestrator | 2026-03-30 00:55:48.252475 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-30 00:55:48.252482 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:02.250) 0:00:17.766 ********** 2026-03-30 00:55:48.252488 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:55:48.252495 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:55:48.252501 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:55:48.252507 | orchestrator | 2026-03-30 00:55:48.252513 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-30 00:55:48.252519 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:00.428) 0:00:18.195 ********** 2026-03-30 00:55:48.252530 | orchestrator | 2026-03-30 00:55:48.252536 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-30 00:55:48.252542 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:00.063) 0:00:18.259 ********** 2026-03-30 00:55:48.252548 | orchestrator | 2026-03-30 00:55:48.252555 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-30 00:55:48.252561 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:00.061) 0:00:18.320 ********** 2026-03-30 00:55:48.252567 | orchestrator | 2026-03-30 00:55:48.252573 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-30 00:55:48.252579 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:00.063) 0:00:18.384 ********** 2026-03-30 00:55:48.252585 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:55:48.252591 | 
orchestrator | 2026-03-30 00:55:48.252597 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-30 00:55:48.252603 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:00.201) 0:00:18.585 ********** 2026-03-30 00:55:48.252609 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:55:48.252615 | orchestrator | 2026-03-30 00:55:48.252621 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-30 00:55:48.252627 | orchestrator | Monday 30 March 2026 00:53:27 +0000 (0:00:00.279) 0:00:18.865 ********** 2026-03-30 00:55:48.252633 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:55:48.252640 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:55:48.252646 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:55:48.252652 | orchestrator | 2026-03-30 00:55:48.252658 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-30 00:55:48.252664 | orchestrator | Monday 30 March 2026 00:54:18 +0000 (0:00:51.774) 0:01:10.640 ********** 2026-03-30 00:55:48.252670 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:55:48.252676 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:55:48.252682 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:55:48.252688 | orchestrator | 2026-03-30 00:55:48.252694 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-30 00:55:48.252700 | orchestrator | Monday 30 March 2026 00:55:31 +0000 (0:01:12.280) 0:02:22.920 ********** 2026-03-30 00:55:48.252706 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:55:48.252713 | orchestrator | 2026-03-30 00:55:48.252719 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-30 00:55:48.252725 | orchestrator | Monday 30 March 2026 
00:55:31 +0000 (0:00:00.661) 0:02:23.582 ********** 2026-03-30 00:55:48.252731 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:55:48.252737 | orchestrator | 2026-03-30 00:55:48.252743 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-30 00:55:48.252749 | orchestrator | Monday 30 March 2026 00:55:34 +0000 (0:00:02.636) 0:02:26.218 ********** 2026-03-30 00:55:48.252757 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:55:48.252771 | orchestrator | 2026-03-30 00:55:48.252786 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-30 00:55:48.252796 | orchestrator | Monday 30 March 2026 00:55:36 +0000 (0:00:02.447) 0:02:28.666 ********** 2026-03-30 00:55:48.252807 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:55:48.252818 | orchestrator | 2026-03-30 00:55:48.252828 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-30 00:55:48.252838 | orchestrator | Monday 30 March 2026 00:55:39 +0000 (0:00:02.522) 0:02:31.189 ********** 2026-03-30 00:55:48.252847 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:55:48.252853 | orchestrator | 2026-03-30 00:55:48.252859 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-30 00:55:48.252865 | orchestrator | Monday 30 March 2026 00:55:42 +0000 (0:00:02.860) 0:02:34.049 ********** 2026-03-30 00:55:48.252871 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:55:48.252877 | orchestrator | 2026-03-30 00:55:48.252884 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:55:48.252896 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 00:55:48.252903 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-30 00:55:48.252914 | 
orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-30 00:55:48.252921 | orchestrator | 2026-03-30 00:55:48.252927 | orchestrator | 2026-03-30 00:55:48.252933 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:55:48.252939 | orchestrator | Monday 30 March 2026 00:55:45 +0000 (0:00:02.959) 0:02:37.009 ********** 2026-03-30 00:55:48.252945 | orchestrator | =============================================================================== 2026-03-30 00:55:48.252951 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 72.28s 2026-03-30 00:55:48.252960 | orchestrator | opensearch : Restart opensearch container ------------------------------ 51.77s 2026-03-30 00:55:48.252970 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.96s 2026-03-30 00:55:48.252988 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.87s 2026-03-30 00:55:48.252997 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.86s 2026-03-30 00:55:48.253006 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.64s 2026-03-30 00:55:48.253016 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.52s 2026-03-30 00:55:48.253025 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.45s 2026-03-30 00:55:48.253035 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s 2026-03-30 00:55:48.253044 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.25s 2026-03-30 00:55:48.253053 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.18s 2026-03-30 00:55:48.253062 | orchestrator | opensearch : Setting sysctl 
values -------------------------------------- 1.97s 2026-03-30 00:55:48.253071 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.78s 2026-03-30 00:55:48.253081 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.30s 2026-03-30 00:55:48.253090 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.76s 2026-03-30 00:55:48.253100 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s 2026-03-30 00:55:48.253110 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-30 00:55:48.253120 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.49s 2026-03-30 00:55:48.253130 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2026-03-30 00:55:48.253140 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.43s 2026-03-30 00:55:48.253150 | orchestrator | 2026-03-30 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:55:51.296848 | orchestrator | 2026-03-30 00:55:51 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:55:51.298466 | orchestrator | 2026-03-30 00:55:51 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:55:51.298770 | orchestrator | 2026-03-30 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:55:54.341237 | orchestrator | 2026-03-30 00:55:54 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:55:54.342940 | orchestrator | 2026-03-30 00:55:54 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:55:54.342992 | orchestrator | 2026-03-30 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:55:57.381636 | orchestrator | 2026-03-30 00:55:57 | 
INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:55:57.383513 | orchestrator | 2026-03-30 00:55:57 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:55:57.383830 | orchestrator | 2026-03-30 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:00.435181 | orchestrator | 2026-03-30 00:56:00 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:00.437445 | orchestrator | 2026-03-30 00:56:00 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:56:00.437489 | orchestrator | 2026-03-30 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:03.488196 | orchestrator | 2026-03-30 00:56:03 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:03.490664 | orchestrator | 2026-03-30 00:56:03 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state STARTED 2026-03-30 00:56:03.490710 | orchestrator | 2026-03-30 00:56:03 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:06.549828 | orchestrator | 2026-03-30 00:56:06 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:06.550847 | orchestrator | 2026-03-30 00:56:06 | INFO  | Task 70d059d8-a8b1-45fe-b981-daa3107ab34b is in state SUCCESS 2026-03-30 00:56:06.550877 | orchestrator | 2026-03-30 00:56:06.552421 | orchestrator | 2026-03-30 00:56:06.552463 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-30 00:56:06.552472 | orchestrator | 2026-03-30 00:56:06.552479 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-30 00:56:06.552485 | orchestrator | Monday 30 March 2026 00:53:08 +0000 (0:00:00.086) 0:00:00.086 ********** 2026-03-30 00:56:06.552491 | orchestrator | ok: [localhost] => { 2026-03-30 00:56:06.552498 | orchestrator |  "msg": "The task 'Check MariaDB 
service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-30 00:56:06.552504 | orchestrator | } 2026-03-30 00:56:06.552510 | orchestrator | 2026-03-30 00:56:06.552516 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-30 00:56:06.552522 | orchestrator | Monday 30 March 2026 00:53:08 +0000 (0:00:00.028) 0:00:00.114 ********** 2026-03-30 00:56:06.552528 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-30 00:56:06.552534 | orchestrator | ...ignoring 2026-03-30 00:56:06.552539 | orchestrator | 2026-03-30 00:56:06.552544 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-30 00:56:06.552550 | orchestrator | Monday 30 March 2026 00:53:11 +0000 (0:00:02.796) 0:00:02.910 ********** 2026-03-30 00:56:06.552555 | orchestrator | skipping: [localhost] 2026-03-30 00:56:06.552560 | orchestrator | 2026-03-30 00:56:06.552566 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-30 00:56:06.552572 | orchestrator | Monday 30 March 2026 00:53:11 +0000 (0:00:00.050) 0:00:02.960 ********** 2026-03-30 00:56:06.552578 | orchestrator | ok: [localhost] 2026-03-30 00:56:06.552584 | orchestrator | 2026-03-30 00:56:06.552589 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 00:56:06.552595 | orchestrator | 2026-03-30 00:56:06.552601 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:56:06.552606 | orchestrator | Monday 30 March 2026 00:53:11 +0000 (0:00:00.190) 0:00:03.150 ********** 2026-03-30 00:56:06.552612 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:56:06.552617 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:56:06.552623 | orchestrator | ok: 
[testbed-node-2] 2026-03-30 00:56:06.552629 | orchestrator | 2026-03-30 00:56:06.552634 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:56:06.552921 | orchestrator | Monday 30 March 2026 00:53:11 +0000 (0:00:00.266) 0:00:03.417 ********** 2026-03-30 00:56:06.552938 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-30 00:56:06.552945 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-30 00:56:06.552950 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-30 00:56:06.552956 | orchestrator | 2026-03-30 00:56:06.552961 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-30 00:56:06.552967 | orchestrator | 2026-03-30 00:56:06.552972 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-30 00:56:06.552978 | orchestrator | Monday 30 March 2026 00:53:12 +0000 (0:00:00.358) 0:00:03.775 ********** 2026-03-30 00:56:06.552983 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-30 00:56:06.552989 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-30 00:56:06.552995 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-30 00:56:06.553000 | orchestrator | 2026-03-30 00:56:06.553005 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-30 00:56:06.553011 | orchestrator | Monday 30 March 2026 00:53:12 +0000 (0:00:00.338) 0:00:04.114 ********** 2026-03-30 00:56:06.553016 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:56:06.553023 | orchestrator | 2026-03-30 00:56:06.553028 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-30 00:56:06.553034 | orchestrator | Monday 30 March 2026 00:53:13 +0000 
(0:00:00.522) 0:00:04.636 ********** 2026-03-30 00:56:06.553062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-30 00:56:06.553097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-30 00:56:06.553127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-30 00:56:06.553133 | orchestrator | 2026-03-30 00:56:06.553143 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-30 00:56:06.553148 | orchestrator | Monday 30 March 2026 00:53:16 +0000 (0:00:02.981) 0:00:07.618 ********** 2026-03-30 00:56:06.553153 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:56:06.553159 | orchestrator | 
skipping: [testbed-node-2] 2026-03-30 00:56:06.553164 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:56:06.553170 | orchestrator | 2026-03-30 00:56:06.553175 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-30 00:56:06.553180 | orchestrator | Monday 30 March 2026 00:53:16 +0000 (0:00:00.555) 0:00:08.174 ********** 2026-03-30 00:56:06.553186 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:56:06.553191 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:56:06.553196 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:56:06.553201 | orchestrator | 2026-03-30 00:56:06.553210 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-30 00:56:06.553216 | orchestrator | Monday 30 March 2026 00:53:17 +0000 (0:00:01.409) 0:00:09.584 ********** 2026-03-30 00:56:06.553222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-30 00:56:06.553234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-30 00:56:06.553241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-30 00:56:06.553249 | orchestrator | 2026-03-30 00:56:06.553255 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-30 00:56:06.553260 | orchestrator | Monday 30 March 2026 00:53:21 +0000 (0:00:03.032) 0:00:12.616 ********** 2026-03-30 00:56:06.553265 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:56:06.553270 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:56:06.553275 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:56:06.553280 | orchestrator | 2026-03-30 00:56:06.553284 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-30 00:56:06.553290 | orchestrator | Monday 30 March 2026 00:53:22 +0000 (0:00:01.073) 0:00:13.690 ********** 2026-03-30 00:56:06.553294 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:56:06.553320 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:56:06.553325 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:56:06.553330 | orchestrator | 2026-03-30 00:56:06.553335 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-30 00:56:06.553339 | orchestrator | Monday 30 March 2026 00:53:25 +0000 (0:00:03.819) 0:00:17.510 ********** 2026-03-30 00:56:06.553344 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:56:06.553349 | orchestrator | 2026-03-30 00:56:06.553354 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over extra CA certificates] ******** 2026-03-30 00:56:06.553359 | orchestrator | Monday 30 March 2026 00:53:26 +0000 (0:00:00.497) 0:00:18.007 ********** 2026-03-30 00:56:06.553371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})
2026-03-30 00:56:06.553384 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.553402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553408 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.553418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553427 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.553432 | orchestrator |
2026-03-30 00:56:06.553437 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-30 00:56:06.553442 | orchestrator | Monday 30 March 2026 00:53:29 +0000 (0:00:03.220) 0:00:21.228 **********
2026-03-30 00:56:06.553447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553452 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.553461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553470 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.553475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553480 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.553485 | orchestrator |
2026-03-30 00:56:06.553490 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-30 00:56:06.553495 | orchestrator | Monday 30 March 2026 00:53:32 +0000 (0:00:03.043) 0:00:24.271 **********
2026-03-30 00:56:06.553502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553511 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.553519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553525 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.553532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
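The item dumps above all carry the same generated haproxy `custom_member_list`: the bootstrap node is the only active backend and the other two members are marked `backup`, so writes always land on a single Galera node. A minimal sketch of how such member lines could be rendered (the helper name is hypothetical, not part of kolla-ansible):

```python
def mariadb_member_lines(hosts, primary):
    """Render haproxy backend lines like the custom_member_list above.

    `hosts` is a list of (name, address) pairs; every member except the
    primary gets a trailing `backup` so only one node takes traffic.
    """
    lines = []
    for name, addr in hosts:
        line = (f" server {name} {addr}:3306 check port 3306"
                f" inter 2000 rise 2 fall 5")
        if name != primary:
            line += " backup"
        lines.append(line)
    return lines
```

With testbed-node-0 through testbed-node-2 and testbed-node-0 as primary, this reproduces the three `server` lines seen in the dumps.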
2026-03-30 00:56:06.553541 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.553546 | orchestrator |
2026-03-30 00:56:06.553552 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-30 00:56:06.553556 | orchestrator | Monday 30 March 2026 00:53:36 +0000 (0:00:03.626) 0:00:27.897 **********
2026-03-30 00:56:06.553565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-30 00:56:06.553591 | orchestrator |
2026-03-30 00:56:06.553596 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-30 00:56:06.553601 | orchestrator | Monday 30 March 2026 00:53:39 +0000 (0:00:03.280) 0:00:31.178 **********
2026-03-30 00:56:06.553605 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.553610 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:56:06.553615 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:56:06.553620 | orchestrator |
2026-03-30 00:56:06.553625 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-30 00:56:06.553630 | orchestrator | Monday 30 March 2026 00:53:40 +0000 (0:00:00.875) 0:00:32.054 **********
2026-03-30 00:56:06.553692 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.553698 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.553704 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.553709 | orchestrator |
2026-03-30 00:56:06.553715 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-30 00:56:06.553722 | orchestrator | Monday 30 March 2026 00:53:40 +0000 (0:00:00.373) 0:00:32.458 **********
2026-03-30 00:56:06.553727 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.553733 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.553739 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.553745 | orchestrator |
2026-03-30 00:56:06.553751 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-30 00:56:06.553756 | orchestrator | Monday 30 March 2026 00:53:41 +0000 (0:00:00.373) 0:00:32.832 **********
2026-03-30 00:56:06.553763 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-30 00:56:06.553769 | orchestrator | ...ignoring
2026-03-30 00:56:06.553832 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-30 00:56:06.553849 | orchestrator | ...ignoring
2026-03-30 00:56:06.553855 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-30 00:56:06.553860 | orchestrator | ...ignoring
2026-03-30 00:56:06.553865 | orchestrator |
2026-03-30 00:56:06.553871 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-30 00:56:06.553876 | orchestrator | Monday 30 March 2026 00:53:52 +0000 (0:00:11.040) 0:00:43.873 **********
2026-03-30 00:56:06.553882 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.553887 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.553893 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.553898 | orchestrator |
2026-03-30 00:56:06.553903 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-30 00:56:06.553909 | orchestrator | Monday 30 March 2026 00:53:52 +0000 (0:00:00.445) 0:00:44.318 **********
2026-03-30 00:56:06.553914 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.553919 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.553924 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.553930 | orchestrator |
2026-03-30 00:56:06.553935 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-30 00:56:06.553940 | orchestrator | Monday 30 March 2026 00:53:53 +0000 (0:00:00.410) 0:00:44.729 **********
2026-03-30 00:56:06.553945 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.553955 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.553960 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.553965 | orchestrator |
2026-03-30 00:56:06.553971 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-30 00:56:06.553976 | orchestrator | Monday 30 March 2026 00:53:53 +0000 (0:00:00.430) 0:00:45.159 **********
2026-03-30 00:56:06.553981 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.553986 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.553992 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.553997 | orchestrator |
2026-03-30 00:56:06.554003 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-30 00:56:06.554008 | orchestrator | Monday 30 March 2026 00:53:54 +0000 (0:00:00.803) 0:00:45.963 **********
2026-03-30 00:56:06.554046 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.554052 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.554057 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.554063 | orchestrator |
2026-03-30 00:56:06.554069 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-30 00:56:06.554075 | orchestrator | Monday 30 March 2026 00:53:54 +0000 (0:00:00.424) 0:00:46.387 **********
2026-03-30 00:56:06.554086 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.554093 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.554098 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.554104 | orchestrator |
2026-03-30 00:56:06.554109 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-30 00:56:06.554115 | orchestrator | Monday 30 March 2026 00:53:55 +0000 (0:00:00.402) 0:00:46.790 **********
2026-03-30 00:56:06.554120 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.554126 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.554131 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-30 00:56:06.554137 | orchestrator |
2026-03-30 00:56:06.554210 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-30 00:56:06.554217 | orchestrator | Monday 30 March 2026 00:53:55 +0000 (0:00:00.370) 0:00:47.160 **********
2026-03-30 00:56:06.554222 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.554227 | orchestrator |
2026-03-30 00:56:06.554233 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-30 00:56:06.554239 | orchestrator | Monday 30 March 2026 00:54:06 +0000 (0:00:10.608) 0:00:57.769 **********
2026-03-30 00:56:06.554244 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.554258 | orchestrator |
2026-03-30 00:56:06.554264 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-30 00:56:06.554270 | orchestrator | Monday 30 March 2026 00:54:06 +0000 (0:00:00.228) 0:00:57.998 **********
2026-03-30 00:56:06.554275 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.554280 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.554286 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.554291 | orchestrator |
2026-03-30 00:56:06.554296 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-30 00:56:06.554301 | orchestrator | Monday 30 March 2026 00:54:07 +0000 (0:00:00.747) 0:00:58.745 **********
2026-03-30 00:56:06.554307 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.554313 | orchestrator |
2026-03-30 00:56:06.554319 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-30 00:56:06.554324 | orchestrator | Monday 30 March 2026 00:54:14 +0000 (0:00:06.894) 0:01:05.640 **********
2026-03-30 00:56:06.554329 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.554335 | orchestrator |
2026-03-30 00:56:06.554340 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-30 00:56:06.554346 | orchestrator | Monday 30 March 2026 00:54:15 +0000 (0:00:01.602) 0:01:07.242 **********
2026-03-30 00:56:06.554351 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.554356 | orchestrator |
2026-03-30 00:56:06.554362 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-30 00:56:06.554367 | orchestrator | Monday 30 March 2026 00:54:18 +0000 (0:00:02.699) 0:01:09.942 **********
2026-03-30 00:56:06.554372 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.554378 | orchestrator |
2026-03-30 00:56:06.554383 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-30 00:56:06.554480 | orchestrator | Monday 30 March 2026 00:54:18 +0000 (0:00:00.121) 0:01:10.064 **********
2026-03-30 00:56:06.554489 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.554495 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.554501 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.554507 | orchestrator |
2026-03-30 00:56:06.554513 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-30 00:56:06.554519 | orchestrator | Monday 30 March 2026 00:54:18 +0000 (0:00:00.297) 0:01:10.361 **********
2026-03-30 00:56:06.554525 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.554531 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:56:06.554536 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:56:06.554541 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-30 00:56:06.554547 | orchestrator |
2026-03-30 00:56:06.554553 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-30 00:56:06.554559 | orchestrator | skipping: no hosts matched
2026-03-30 00:56:06.554565 | orchestrator |
2026-03-30 00:56:06.554571 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-30 00:56:06.554577 | orchestrator |
2026-03-30 00:56:06.554584 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-30 00:56:06.554589 | orchestrator | Monday 30 March 2026 00:54:19 +0000 (0:00:00.277) 0:01:10.639 **********
2026-03-30 00:56:06.554595 | orchestrator | changed: [testbed-node-1]
2026-03-30 00:56:06.554601 | orchestrator |
2026-03-30 00:56:06.554608 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-30 00:56:06.554614 | orchestrator | Monday 30 March 2026 00:54:36 +0000 (0:00:17.244) 0:01:27.884 **********
2026-03-30 00:56:06.554676 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.554684 | orchestrator |
2026-03-30 00:56:06.554690 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-30 00:56:06.554696 | orchestrator | Monday 30 March 2026 00:54:51 +0000 (0:00:15.612) 0:01:43.496 **********
2026-03-30 00:56:06.554708 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.554737 | orchestrator |
2026-03-30 00:56:06.554755 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-30 00:56:06.554762 | orchestrator |
2026-03-30 00:56:06.554768 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-30 00:56:06.554775 | orchestrator | Monday 30 March 2026 00:54:54 +0000 (0:00:02.536) 0:01:46.033 **********
2026-03-30 00:56:06.554780 | orchestrator | changed: [testbed-node-2]
2026-03-30 00:56:06.554786 | orchestrator |
2026-03-30 00:56:06.554792 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-30 00:56:06.554798 | orchestrator | Monday 30 March 2026 00:55:17 +0000 (0:00:22.995) 0:02:09.028 **********
2026-03-30 00:56:06.554804 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.554810 | orchestrator |
2026-03-30 00:56:06.554816 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-30 00:56:06.554822 | orchestrator | Monday 30 March 2026 00:55:29 +0000 (0:00:11.920) 0:02:20.949 **********
2026-03-30 00:56:06.554829 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.554834 | orchestrator |
2026-03-30 00:56:06.554841 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-30 00:56:06.554847 | orchestrator |
2026-03-30 00:56:06.554860 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-30 00:56:06.554867 | orchestrator | Monday 30 March 2026 00:55:31 +0000 (0:00:02.348) 0:02:23.297 **********
2026-03-30 00:56:06.554872 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.554879 | orchestrator |
2026-03-30 00:56:06.554884 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-30 00:56:06.554889 | orchestrator | Monday 30 March 2026 00:55:43 +0000 (0:00:11.529) 0:02:34.827 **********
2026-03-30 00:56:06.554894 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.554900 | orchestrator |
2026-03-30 00:56:06.554905 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-30 00:56:06.554910 | orchestrator | Monday 30 March 2026 00:55:47 +0000 (0:00:04.621) 0:02:39.448 **********
2026-03-30 00:56:06.554915 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.554920 | orchestrator |
2026-03-30 00:56:06.554991 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-30 00:56:06.555001 | orchestrator |
2026-03-30 00:56:06.555006 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-30 00:56:06.555011 | orchestrator | Monday 30 March 2026 00:55:50 +0000 (0:00:02.479) 0:02:41.928 **********
2026-03-30 00:56:06.555016 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 00:56:06.555021 | orchestrator |
2026-03-30 00:56:06.555026 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-30 00:56:06.555032 | orchestrator | Monday 30 March 2026 00:55:50 +0000 (0:00:00.555) 0:02:42.483 **********
2026-03-30 00:56:06.555037 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.555042 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.555048 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.555053 | orchestrator |
2026-03-30 00:56:06.555058 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-30 00:56:06.555063 | orchestrator | Monday 30 March 2026 00:55:53 +0000 (0:00:02.723) 0:02:45.207 **********
2026-03-30 00:56:06.555069 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.555074 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.555079 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.555084 | orchestrator |
2026-03-30 00:56:06.555089 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-30 00:56:06.555094 | orchestrator | Monday 30 March 2026 00:55:56 +0000 (0:00:02.575) 0:02:47.783 **********
2026-03-30 00:56:06.555099 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.555104 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.555109 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.555132 | orchestrator |
2026-03-30 00:56:06.555137 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-30 00:56:06.555150 | orchestrator | Monday 30 March 2026 00:55:58 +0000 (0:00:02.110) 0:02:49.893 **********
2026-03-30 00:56:06.555155 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.555160 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.555165 | orchestrator | changed: [testbed-node-0]
2026-03-30 00:56:06.555171 | orchestrator |
2026-03-30 00:56:06.555176 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-30 00:56:06.555181 | orchestrator | Monday 30 March 2026 00:56:00 +0000 (0:00:02.236) 0:02:52.129 **********
2026-03-30 00:56:06.555186 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:56:06.555192 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:56:06.555198 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:56:06.555203 | orchestrator |
2026-03-30 00:56:06.555208 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-30 00:56:06.555213 | orchestrator | Monday 30 March 2026 00:56:03 +0000 (0:00:02.821) 0:02:54.952 **********
2026-03-30 00:56:06.555218 | orchestrator | skipping: [testbed-node-0]
2026-03-30 00:56:06.555223 | orchestrator | skipping: [testbed-node-1]
2026-03-30 00:56:06.555228 | orchestrator | skipping: [testbed-node-2]
2026-03-30 00:56:06.555233 | orchestrator |
2026-03-30 00:56:06.555239 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:56:06.555245 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-30 00:56:06.555251 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-30 00:56:06.555258 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-30 00:56:06.555263 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-30 00:56:06.555269 | orchestrator |
2026-03-30 00:56:06.555274 | orchestrator |
2026-03-30 00:56:06.555284 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:56:06.555289 | orchestrator | Monday 30 March 2026 00:56:03 +0000 (0:00:00.277) 0:02:55.229 **********
2026-03-30 00:56:06.555294 | orchestrator | ===============================================================================
2026-03-30 00:56:06.555300 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.24s
2026-03-30 00:56:06.555305 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.53s
2026-03-30 00:56:06.555312 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.53s
2026-03-30 00:56:06.555317 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.04s
2026-03-30 00:56:06.555323 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.61s
2026-03-30 00:56:06.555328 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.89s
2026-03-30 00:56:06.555339 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.89s
2026-03-30 00:56:06.555346 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.62s
2026-03-30 00:56:06.555351 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.82s
2026-03-30 00:56:06.555357 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.63s
2026-03-30 00:56:06.555362 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.28s
2026-03-30 00:56:06.555368 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.22s
2026-03-30 00:56:06.555373 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.04s
2026-03-30 00:56:06.555379 | orchestrator | mariadb :
Copying over config.json files for services ------------------- 3.03s 2026-03-30 00:56:06.555385 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.98s 2026-03-30 00:56:06.555410 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.82s 2026-03-30 00:56:06.555417 | orchestrator | Check MariaDB service --------------------------------------------------- 2.80s 2026-03-30 00:56:06.555422 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.72s 2026-03-30 00:56:06.555428 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.70s 2026-03-30 00:56:06.555434 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.58s 2026-03-30 00:56:06.555440 | orchestrator | 2026-03-30 00:56:06 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:06.556480 | orchestrator | 2026-03-30 00:56:06 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:06.556513 | orchestrator | 2026-03-30 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:09.599213 | orchestrator | 2026-03-30 00:56:09 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:09.600148 | orchestrator | 2026-03-30 00:56:09 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:09.601422 | orchestrator | 2026-03-30 00:56:09 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:09.601882 | orchestrator | 2026-03-30 00:56:09 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:12.649306 | orchestrator | 2026-03-30 00:56:12 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:12.650800 | orchestrator | 2026-03-30 00:56:12 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 
2026-03-30 00:56:12.650875 | orchestrator | 2026-03-30 00:56:12 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:12.650982 | orchestrator | 2026-03-30 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:15.704427 | orchestrator | 2026-03-30 00:56:15 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:15.705024 | orchestrator | 2026-03-30 00:56:15 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:15.708344 | orchestrator | 2026-03-30 00:56:15 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:15.708438 | orchestrator | 2026-03-30 00:56:15 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:18.736322 | orchestrator | 2026-03-30 00:56:18 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:18.737669 | orchestrator | 2026-03-30 00:56:18 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:18.740098 | orchestrator | 2026-03-30 00:56:18 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:18.740158 | orchestrator | 2026-03-30 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:21.777210 | orchestrator | 2026-03-30 00:56:21 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:21.777426 | orchestrator | 2026-03-30 00:56:21 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:21.778919 | orchestrator | 2026-03-30 00:56:21 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:21.778969 | orchestrator | 2026-03-30 00:56:21 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:24.827878 | orchestrator | 2026-03-30 00:56:24 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:24.829226 | orchestrator | 2026-03-30 
00:56:24 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:24.831103 | orchestrator | 2026-03-30 00:56:24 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:24.831158 | orchestrator | 2026-03-30 00:56:24 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:27.882844 | orchestrator | 2026-03-30 00:56:27 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:27.883605 | orchestrator | 2026-03-30 00:56:27 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:27.884086 | orchestrator | 2026-03-30 00:56:27 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:27.884305 | orchestrator | 2026-03-30 00:56:27 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:30.921257 | orchestrator | 2026-03-30 00:56:30 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:30.924283 | orchestrator | 2026-03-30 00:56:30 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:30.926724 | orchestrator | 2026-03-30 00:56:30 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:30.926788 | orchestrator | 2026-03-30 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:33.971776 | orchestrator | 2026-03-30 00:56:33 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state STARTED 2026-03-30 00:56:33.973686 | orchestrator | 2026-03-30 00:56:33 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:33.977441 | orchestrator | 2026-03-30 00:56:33 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:33.977501 | orchestrator | 2026-03-30 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:37.032883 | orchestrator | 2026-03-30 00:56:37.032945 | orchestrator | [WARNING]: Collection 
community.general does not support Ansible version 2026-03-30 00:56:37.032961 | orchestrator | 2.16.14 2026-03-30 00:56:37.032969 | orchestrator | 2026-03-30 00:56:37.032975 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-30 00:56:37.032981 | orchestrator | 2026-03-30 00:56:37.032987 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-30 00:56:37.032993 | orchestrator | Monday 30 March 2026 00:54:43 +0000 (0:00:00.552) 0:00:00.552 ********** 2026-03-30 00:56:37.032999 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:56:37.033006 | orchestrator | 2026-03-30 00:56:37.033011 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-30 00:56:37.033017 | orchestrator | Monday 30 March 2026 00:54:43 +0000 (0:00:00.605) 0:00:01.157 ********** 2026-03-30 00:56:37.033023 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033029 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033035 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033041 | orchestrator | 2026-03-30 00:56:37.033047 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-30 00:56:37.033053 | orchestrator | Monday 30 March 2026 00:54:44 +0000 (0:00:00.912) 0:00:02.070 ********** 2026-03-30 00:56:37.033059 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033065 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033072 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033078 | orchestrator | 2026-03-30 00:56:37.033084 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-30 00:56:37.033091 | orchestrator | Monday 30 March 2026 00:54:44 +0000 (0:00:00.292) 0:00:02.363 ********** 2026-03-30 00:56:37.033097 | orchestrator | 
ok: [testbed-node-3] 2026-03-30 00:56:37.033103 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033110 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033130 | orchestrator | 2026-03-30 00:56:37.033137 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-30 00:56:37.033143 | orchestrator | Monday 30 March 2026 00:54:45 +0000 (0:00:00.930) 0:00:03.293 ********** 2026-03-30 00:56:37.033150 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033156 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033160 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033164 | orchestrator | 2026-03-30 00:56:37.033168 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-30 00:56:37.033172 | orchestrator | Monday 30 March 2026 00:54:46 +0000 (0:00:00.308) 0:00:03.602 ********** 2026-03-30 00:56:37.033175 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033179 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033183 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033187 | orchestrator | 2026-03-30 00:56:37.033190 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-30 00:56:37.033202 | orchestrator | Monday 30 March 2026 00:54:46 +0000 (0:00:00.279) 0:00:03.881 ********** 2026-03-30 00:56:37.033205 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033209 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033213 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033217 | orchestrator | 2026-03-30 00:56:37.033221 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-30 00:56:37.033225 | orchestrator | Monday 30 March 2026 00:54:46 +0000 (0:00:00.325) 0:00:04.207 ********** 2026-03-30 00:56:37.033228 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.033233 | orchestrator | 
skipping: [testbed-node-4] 2026-03-30 00:56:37.033237 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.033240 | orchestrator | 2026-03-30 00:56:37.033244 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-30 00:56:37.033249 | orchestrator | Monday 30 March 2026 00:54:47 +0000 (0:00:00.453) 0:00:04.660 ********** 2026-03-30 00:56:37.033255 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033261 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033376 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033393 | orchestrator | 2026-03-30 00:56:37.033399 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-30 00:56:37.033406 | orchestrator | Monday 30 March 2026 00:54:47 +0000 (0:00:00.289) 0:00:04.950 ********** 2026-03-30 00:56:37.033413 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:56:37.033419 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:56:37.033425 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:56:37.033431 | orchestrator | 2026-03-30 00:56:37.033435 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-30 00:56:37.033439 | orchestrator | Monday 30 March 2026 00:54:48 +0000 (0:00:00.626) 0:00:05.576 ********** 2026-03-30 00:56:37.033442 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033446 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033450 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033454 | orchestrator | 2026-03-30 00:56:37.033457 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-30 00:56:37.033461 | orchestrator | Monday 30 March 2026 00:54:48 +0000 (0:00:00.418) 0:00:05.994 
********** 2026-03-30 00:56:37.033465 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:56:37.033469 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:56:37.033472 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:56:37.033476 | orchestrator | 2026-03-30 00:56:37.033480 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-30 00:56:37.033484 | orchestrator | Monday 30 March 2026 00:54:51 +0000 (0:00:03.061) 0:00:09.056 ********** 2026-03-30 00:56:37.033494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-30 00:56:37.033498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-30 00:56:37.033501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-30 00:56:37.033505 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.033509 | orchestrator | 2026-03-30 00:56:37.033522 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-30 00:56:37.033526 | orchestrator | Monday 30 March 2026 00:54:52 +0000 (0:00:00.407) 0:00:09.463 ********** 2026-03-30 00:56:37.033531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.033537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.033541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.033545 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.033549 | orchestrator | 2026-03-30 00:56:37.033552 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-30 00:56:37.033556 | orchestrator | Monday 30 March 2026 00:54:52 +0000 (0:00:00.772) 0:00:10.235 ********** 2026-03-30 00:56:37.033563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.033577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.033585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.033592 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.033598 | orchestrator | 2026-03-30 
00:56:37.033604 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-30 00:56:37.033611 | orchestrator | Monday 30 March 2026 00:54:52 +0000 (0:00:00.162) 0:00:10.398 ********** 2026-03-30 00:56:37.033618 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c6897091ef82', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-30 00:54:49.529015', 'end': '2026-03-30 00:54:49.566967', 'delta': '0:00:00.037952', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6897091ef82'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-30 00:56:37.033631 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8bf5889159fb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-30 00:54:50.591967', 'end': '2026-03-30 00:54:50.630955', 'delta': '0:00:00.038988', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bf5889159fb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-30 00:56:37.033640 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd2bd5a9c760e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2026-03-30 00:54:51.436623', 'end': '2026-03-30 00:54:51.480027', 'delta': '0:00:00.043404', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d2bd5a9c760e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-30 00:56:37.033644 | orchestrator | 2026-03-30 00:56:37.033648 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-30 00:56:37.033652 | orchestrator | Monday 30 March 2026 00:54:53 +0000 (0:00:00.359) 0:00:10.757 ********** 2026-03-30 00:56:37.033655 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.033659 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.033663 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.033667 | orchestrator | 2026-03-30 00:56:37.033670 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-30 00:56:37.033674 | orchestrator | Monday 30 March 2026 00:54:53 +0000 (0:00:00.427) 0:00:11.185 ********** 2026-03-30 00:56:37.033678 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-30 00:56:37.033682 | orchestrator | 2026-03-30 00:56:37.033685 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-30 00:56:37.033689 | orchestrator | Monday 30 March 2026 00:54:55 +0000 (0:00:01.342) 0:00:12.528 ********** 2026-03-30 00:56:37.033932 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.033941 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.033945 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.033949 | 
orchestrator | 2026-03-30 00:56:37.033953 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-30 00:56:37.033957 | orchestrator | Monday 30 March 2026 00:54:55 +0000 (0:00:00.277) 0:00:12.805 ********** 2026-03-30 00:56:37.033961 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.033964 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.033968 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.033972 | orchestrator | 2026-03-30 00:56:37.033997 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-30 00:56:37.034006 | orchestrator | Monday 30 March 2026 00:54:55 +0000 (0:00:00.417) 0:00:13.223 ********** 2026-03-30 00:56:37.034010 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034039 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034043 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034047 | orchestrator | 2026-03-30 00:56:37.034051 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-30 00:56:37.034055 | orchestrator | Monday 30 March 2026 00:54:56 +0000 (0:00:00.456) 0:00:13.679 ********** 2026-03-30 00:56:37.034058 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.034067 | orchestrator | 2026-03-30 00:56:37.034070 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-30 00:56:37.034074 | orchestrator | Monday 30 March 2026 00:54:56 +0000 (0:00:00.158) 0:00:13.838 ********** 2026-03-30 00:56:37.034078 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034082 | orchestrator | 2026-03-30 00:56:37.034085 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-30 00:56:37.034089 | orchestrator | Monday 30 March 2026 00:54:56 +0000 (0:00:00.243) 0:00:14.082 ********** 2026-03-30 00:56:37.034093 | 
orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034097 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034101 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034104 | orchestrator | 2026-03-30 00:56:37.034108 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-30 00:56:37.034112 | orchestrator | Monday 30 March 2026 00:54:56 +0000 (0:00:00.274) 0:00:14.356 ********** 2026-03-30 00:56:37.034116 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034119 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034123 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034127 | orchestrator | 2026-03-30 00:56:37.034131 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-30 00:56:37.034134 | orchestrator | Monday 30 March 2026 00:54:57 +0000 (0:00:00.335) 0:00:14.691 ********** 2026-03-30 00:56:37.034138 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034142 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034146 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034149 | orchestrator | 2026-03-30 00:56:37.034153 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-30 00:56:37.034157 | orchestrator | Monday 30 March 2026 00:54:57 +0000 (0:00:00.468) 0:00:15.160 ********** 2026-03-30 00:56:37.034161 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034164 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034168 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034172 | orchestrator | 2026-03-30 00:56:37.034175 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-30 00:56:37.034179 | orchestrator | Monday 30 March 2026 00:54:58 +0000 (0:00:00.309) 0:00:15.469 ********** 2026-03-30 00:56:37.034183 | 
orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034187 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034190 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034194 | orchestrator | 2026-03-30 00:56:37.034198 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-30 00:56:37.034202 | orchestrator | Monday 30 March 2026 00:54:58 +0000 (0:00:00.321) 0:00:15.791 ********** 2026-03-30 00:56:37.034205 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034209 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034213 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034232 | orchestrator | 2026-03-30 00:56:37.034237 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-30 00:56:37.034241 | orchestrator | Monday 30 March 2026 00:54:58 +0000 (0:00:00.309) 0:00:16.100 ********** 2026-03-30 00:56:37.034244 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034248 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034252 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034256 | orchestrator | 2026-03-30 00:56:37.034259 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-30 00:56:37.034263 | orchestrator | Monday 30 March 2026 00:54:59 +0000 (0:00:00.498) 0:00:16.598 ********** 2026-03-30 00:56:37.034268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17', 'dm-uuid-LVM-VhndrP4JRm6lMg7AksZ6FMYg6vrongntBhq8Y3ZdFP38yXbqOmpgRG5EKvABQxIM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237', 'dm-uuid-LVM-clt1Fc1mc6DYo8CIRrVyGxkMSuH2Bqi8CEXbm2O1oeU38EcT3HRspLHVcLRRRQHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034357 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-T52kc9-Ldma-uyoF-foMM-TKOt-Q6hL-lWcd0W', 'scsi-0QEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543', 'scsi-SQEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3NrJ0N-c5A3-kVeB-M2yt-WKbg-feCR-MQO7Hq', 'scsi-0QEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf', 'scsi-SQEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f', 'scsi-SQEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2', 'dm-uuid-LVM-7NSr7HCCIWNL8JT5s5DWeooLgm1tLA0wkqWH4bB8nx79cAMWo0Aep8fPbkrkd7aU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3', 'dm-uuid-LVM-6Z6bNPd3WmujtOY3ALBxsjXmhQv6S7FTLwE049uSuyJ2dYpnlJy7LWfD7MstxUzI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034445 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.034449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034464 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034496 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dwZ7EB-kVjX-n0aN-8G5X-2Diw-sf1q-CJJtQ3', 'scsi-0QEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a', 'scsi-SQEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cj2p5T-0MVX-qd8p-rpkg-503F-bgsJ-8eRiJ0', 'scsi-0QEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec', 'scsi-SQEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f', 'dm-uuid-LVM-2uuZCCeT9vVzXmKcCJCigXK8qGQm5Z9cANJbXZ2Z566G6B1fCFamf4KR5cElvaU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a', 'scsi-SQEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4', 'dm-uuid-LVM-kNEger4NY8CmZGRArGu8wpScmnkCU4EBN6oEYN0TVN8CaN3dJgrQY1Cm14otlkFv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37 | INFO  | Task c3f766cf-4dc4-495a-9c3e-695f2a42b453 is in state SUCCESS 2026-03-30 00:56:37.034533 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.034537 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-30 00:56:37.034577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034584 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b3ctSr-4ZBD-cg4d-Gfjf-hW1b-OTXp-B4dAW8', 'scsi-0QEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c', 'scsi-SQEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3HGldB-UGB3-nQU2-IT0R-Q7hS-Dpi6-YOBzBS', 'scsi-0QEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74', 'scsi-SQEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57', 'scsi-SQEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-30 00:56:37.034606 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.034609 | orchestrator | 2026-03-30 00:56:37.034613 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-30 00:56:37.034617 | orchestrator | Monday 30 March 2026 00:54:59 +0000 (0:00:00.631) 0:00:17.229 ********** 2026-03-30 00:56:37.034622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17', 'dm-uuid-LVM-VhndrP4JRm6lMg7AksZ6FMYg6vrongntBhq8Y3ZdFP38yXbqOmpgRG5EKvABQxIM'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.034628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237', 'dm-uuid-LVM-clt1Fc1mc6DYo8CIRrVyGxkMSuH2Bqi8CEXbm2O1oeU38EcT3HRspLHVcLRRRQHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.034632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-30 00:56:37.034636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034658 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16', 'scsi-SQEMU_QEMU_HARDDISK_453a1142-2b55-4cf7-822b-e97736e949f0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2', 'dm-uuid-LVM-7NSr7HCCIWNL8JT5s5DWeooLgm1tLA0wkqWH4bB8nx79cAMWo0Aep8fPbkrkd7aU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8f4fd2da--a001--5de7--aa88--1349b3eb3c17-osd--block--8f4fd2da--a001--5de7--aa88--1349b3eb3c17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-T52kc9-Ldma-uyoF-foMM-TKOt-Q6hL-lWcd0W', 'scsi-0QEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543', 'scsi-SQEMU_QEMU_HARDDISK_482d2c36-c609-4f47-a0c5-2f5f73693543'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--deb01b05--78a2--5c26--94fe--c042bb294237-osd--block--deb01b05--78a2--5c26--94fe--c042bb294237'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3NrJ0N-c5A3-kVeB-M2yt-WKbg-feCR-MQO7Hq', 'scsi-0QEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf', 'scsi-SQEMU_QEMU_HARDDISK_8036b2a3-a86f-46db-9367-e2397ecc6abf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3', 'dm-uuid-LVM-6Z6bNPd3WmujtOY3ALBxsjXmhQv6S7FTLwE049uSuyJ2dYpnlJy7LWfD7MstxUzI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f', 'scsi-SQEMU_QEMU_HARDDISK_11718c35-ee93-4e01-b68e-0ea3ca8f5a3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034727 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:56:37.034732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034755 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f', 'dm-uuid-LVM-2uuZCCeT9vVzXmKcCJCigXK8qGQm5Z9cANJbXZ2Z566G6B1fCFamf4KR5cElvaU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4', 'dm-uuid-LVM-kNEger4NY8CmZGRArGu8wpScmnkCU4EBN6oEYN0TVN8CaN3dJgrQY1Cm14otlkFv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16', 'scsi-SQEMU_QEMU_HARDDISK_1826e9b5-14e8-452e-be3c-21e3cc09cbbf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034791 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034797 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3e5d1498--d7a5--5a93--a004--d1785e71aab2-osd--block--3e5d1498--d7a5--5a93--a004--d1785e71aab2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dwZ7EB-kVjX-n0aN-8G5X-2Diw-sf1q-CJJtQ3', 'scsi-0QEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a', 'scsi-SQEMU_QEMU_HARDDISK_e10eeafd-2903-4790-b7e1-aa168837035a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ae410091--a002--50e8--b50c--29c9b1a933c3-osd--block--ae410091--a002--50e8--b50c--29c9b1a933c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cj2p5T-0MVX-qd8p-rpkg-503F-bgsJ-8eRiJ0', 'scsi-0QEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec', 'scsi-SQEMU_QEMU_HARDDISK_cc358305-34de-4116-8302-212671220cec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a', 'scsi-SQEMU_QEMU_HARDDISK_f4b6223c-7e5a-4bfd-b745-cff7b69b076a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034832 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:56:37.034836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034840 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034845 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034851 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034858 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d0abad1-e3c2-4c21-a543-5ea974ffa3d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034865 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f-osd--block--6dc98b08--79a1--56b1--a9a0--4cf05631fa6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b3ctSr-4ZBD-cg4d-Gfjf-hW1b-OTXp-B4dAW8', 'scsi-0QEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c', 'scsi-SQEMU_QEMU_HARDDISK_73772ae7-f59b-43b9-ae4a-d5ef866e883c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5c90778--4ce0--5f2b--bfca--518c358a14f4-osd--block--b5c90778--4ce0--5f2b--bfca--518c358a14f4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3HGldB-UGB3-nQU2-IT0R-Q7hS-Dpi6-YOBzBS', 'scsi-0QEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74', 'scsi-SQEMU_QEMU_HARDDISK_6acc619e-8818-4e1c-86d6-dab030db0f74'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57', 'scsi-SQEMU_QEMU_HARDDISK_06283a56-3f29-4145-9845-ba3e73029c57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-30-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-30 00:56:37.034904 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:56:37.034910 | orchestrator |
2026-03-30 00:56:37.034916 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-30 00:56:37.034922 | orchestrator | Monday 30 March 2026 00:55:00 +0000 (0:00:00.625) 0:00:17.855 **********
2026-03-30 00:56:37.034928 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:56:37.034934 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:56:37.034940 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:56:37.034945 | orchestrator |
2026-03-30 00:56:37.034951 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-30 00:56:37.034957 | orchestrator | Monday 30 March 2026 00:55:01 +0000 (0:00:00.708) 0:00:18.563 **********
2026-03-30 00:56:37.034964 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:56:37.034970 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:56:37.034976 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:56:37.034982 | orchestrator |
2026-03-30 00:56:37.034988 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-30 00:56:37.034995 | orchestrator | Monday 30 March 2026 00:55:01 +0000 (0:00:00.466) 0:00:19.030 **********
2026-03-30 00:56:37.035000 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:56:37.035006 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:56:37.035013 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:56:37.035019 | orchestrator |
2026-03-30 00:56:37.035025 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-30 00:56:37.035036 | orchestrator | Monday 30 March 2026 00:55:02 +0000 (0:00:00.723) 0:00:19.753 **********
2026-03-30 00:56:37.035042 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:56:37.035049 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:56:37.035055 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:56:37.035061 | orchestrator |
2026-03-30 00:56:37.035067 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-30 00:56:37.035071 | orchestrator | Monday 30 March 2026 00:55:02 +0000 (0:00:00.289) 0:00:20.042 **********
2026-03-30 00:56:37.035075 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:56:37.035079 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:56:37.035082 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:56:37.035086 | orchestrator |
2026-03-30 00:56:37.035092 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-30 00:56:37.035096 | orchestrator | Monday 30 March 2026 00:55:02 +0000 (0:00:00.403) 0:00:20.446 **********
2026-03-30 00:56:37.035100 | orchestrator | skipping: [testbed-node-3]
2026-03-30 00:56:37.035104 | orchestrator | skipping: [testbed-node-4]
2026-03-30 00:56:37.035108 | orchestrator | skipping: [testbed-node-5]
2026-03-30 00:56:37.035111 | orchestrator |
2026-03-30 00:56:37.035115 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-30 00:56:37.035119 | orchestrator | Monday 30 March 2026 00:55:03 +0000 (0:00:00.468) 0:00:20.914 **********
2026-03-30 00:56:37.035123 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-30 00:56:37.035127 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-30 00:56:37.035130 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-30 00:56:37.035134 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-30 00:56:37.035138 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-30 00:56:37.035142 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-30 00:56:37.035146 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-30 00:56:37.035149 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-30 00:56:37.035153 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-30 00:56:37.035157 | orchestrator |
2026-03-30 00:56:37.035161 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-30 00:56:37.035164 | orchestrator | Monday 30 March 2026 00:55:04 +0000 (0:00:00.905) 0:00:21.820 **********
2026-03-30 00:56:37.035168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-30 00:56:37.035172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-30 00:56:37.035176 | orchestrator | skipping:
[testbed-node-3] => (item=testbed-node-2)  2026-03-30 00:56:37.035179 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-30 00:56:37.035187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-30 00:56:37.035191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-30 00:56:37.035194 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.035198 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-30 00:56:37.035202 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-30 00:56:37.035205 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-30 00:56:37.035209 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.035213 | orchestrator | 2026-03-30 00:56:37.035217 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-30 00:56:37.035220 | orchestrator | Monday 30 March 2026 00:55:04 +0000 (0:00:00.352) 0:00:22.172 ********** 2026-03-30 00:56:37.035225 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 00:56:37.035229 | orchestrator | 2026-03-30 00:56:37.035236 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-30 00:56:37.035257 | orchestrator | Monday 30 March 2026 00:55:05 +0000 (0:00:00.755) 0:00:22.928 ********** 2026-03-30 00:56:37.035262 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035265 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.035269 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.035273 | orchestrator | 2026-03-30 00:56:37.035277 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-30 00:56:37.035280 | orchestrator | Monday 30 March 2026 00:55:05 +0000 (0:00:00.308) 0:00:23.237 ********** 2026-03-30 00:56:37.035284 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035288 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.035292 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.035296 | orchestrator | 2026-03-30 00:56:37.035299 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-30 00:56:37.035303 | orchestrator | Monday 30 March 2026 00:55:06 +0000 (0:00:00.277) 0:00:23.515 ********** 2026-03-30 00:56:37.035307 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035311 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.035355 | orchestrator | skipping: [testbed-node-5] 2026-03-30 00:56:37.035361 | orchestrator | 2026-03-30 00:56:37.035367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-30 00:56:37.035371 | orchestrator | Monday 30 March 2026 00:55:06 +0000 (0:00:00.298) 0:00:23.813 ********** 2026-03-30 00:56:37.035375 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.035378 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.035382 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.035394 | orchestrator | 2026-03-30 00:56:37.035398 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-30 00:56:37.035402 | orchestrator | Monday 30 March 2026 00:55:06 +0000 (0:00:00.579) 0:00:24.393 ********** 2026-03-30 00:56:37.035405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:56:37.035409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:56:37.035413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:56:37.035417 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035420 | 
orchestrator | 2026-03-30 00:56:37.035424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-30 00:56:37.035428 | orchestrator | Monday 30 March 2026 00:55:07 +0000 (0:00:00.369) 0:00:24.763 ********** 2026-03-30 00:56:37.035432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:56:37.035436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:56:37.035439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:56:37.035443 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035447 | orchestrator | 2026-03-30 00:56:37.035451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-30 00:56:37.035457 | orchestrator | Monday 30 March 2026 00:55:07 +0000 (0:00:00.357) 0:00:25.121 ********** 2026-03-30 00:56:37.035461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-30 00:56:37.035465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-30 00:56:37.035468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-30 00:56:37.035472 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035476 | orchestrator | 2026-03-30 00:56:37.035480 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-30 00:56:37.035484 | orchestrator | Monday 30 March 2026 00:55:08 +0000 (0:00:00.354) 0:00:25.475 ********** 2026-03-30 00:56:37.035487 | orchestrator | ok: [testbed-node-3] 2026-03-30 00:56:37.035491 | orchestrator | ok: [testbed-node-4] 2026-03-30 00:56:37.035495 | orchestrator | ok: [testbed-node-5] 2026-03-30 00:56:37.035499 | orchestrator | 2026-03-30 00:56:37.035502 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-30 00:56:37.035506 | orchestrator | Monday 30 March 2026 00:55:08 +0000 
(0:00:00.325) 0:00:25.800 ********** 2026-03-30 00:56:37.035515 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-30 00:56:37.035519 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-30 00:56:37.035522 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-30 00:56:37.035526 | orchestrator | 2026-03-30 00:56:37.035530 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-30 00:56:37.035534 | orchestrator | Monday 30 March 2026 00:55:08 +0000 (0:00:00.496) 0:00:26.296 ********** 2026-03-30 00:56:37.035538 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:56:37.035541 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:56:37.035545 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:56:37.035549 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-30 00:56:37.035553 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-30 00:56:37.035556 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-30 00:56:37.035560 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-30 00:56:37.035564 | orchestrator | 2026-03-30 00:56:37.035568 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-30 00:56:37.035572 | orchestrator | Monday 30 March 2026 00:55:09 +0000 (0:00:00.967) 0:00:27.264 ********** 2026-03-30 00:56:37.035575 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-30 00:56:37.035579 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-30 00:56:37.035583 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-30 00:56:37.035587 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-30 00:56:37.035593 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-30 00:56:37.035604 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-30 00:56:37.035608 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-30 00:56:37.035612 | orchestrator | 2026-03-30 00:56:37.035615 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-30 00:56:37.035619 | orchestrator | Monday 30 March 2026 00:55:11 +0000 (0:00:01.878) 0:00:29.143 ********** 2026-03-30 00:56:37.035623 | orchestrator | skipping: [testbed-node-3] 2026-03-30 00:56:37.035627 | orchestrator | skipping: [testbed-node-4] 2026-03-30 00:56:37.035631 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-30 00:56:37.035634 | orchestrator | 2026-03-30 00:56:37.035638 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-30 00:56:37.035642 | orchestrator | Monday 30 March 2026 00:55:12 +0000 (0:00:00.378) 0:00:29.522 ********** 2026-03-30 00:56:37.035646 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-30 00:56:37.035650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-30 00:56:37.035654 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-30 00:56:37.035661 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-30 00:56:37.035667 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-30 00:56:37.035671 | orchestrator | 2026-03-30 00:56:37.035675 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-30 00:56:37.035679 | orchestrator | Monday 30 March 2026 00:55:49 +0000 (0:00:37.195) 0:01:06.717 ********** 2026-03-30 00:56:37.035683 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035686 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035690 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035694 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035697 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035701 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 
00:56:37.035705 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-30 00:56:37.035709 | orchestrator | 2026-03-30 00:56:37.035712 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-30 00:56:37.035716 | orchestrator | Monday 30 March 2026 00:56:06 +0000 (0:00:17.695) 0:01:24.413 ********** 2026-03-30 00:56:37.035720 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035723 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035727 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035731 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035734 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035738 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035742 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-30 00:56:37.035746 | orchestrator | 2026-03-30 00:56:37.035749 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-30 00:56:37.035753 | orchestrator | Monday 30 March 2026 00:56:15 +0000 (0:00:08.937) 0:01:33.350 ********** 2026-03-30 00:56:37.035757 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035760 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-30 00:56:37.035764 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-30 00:56:37.035770 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035774 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-30 00:56:37.035778 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-30 00:56:37.035781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035785 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-30 00:56:37.035789 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-30 00:56:37.035793 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035799 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-30 00:56:37.035809 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-30 00:56:37.035814 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035817 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-30 00:56:37.035821 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-30 00:56:37.035825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-30 00:56:37.035829 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-30 00:56:37.035832 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-30 00:56:37.035836 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-30 00:56:37.035840 | orchestrator | 2026-03-30 00:56:37.035844 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:56:37.035848 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-30 00:56:37.035852 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-30 00:56:37.035856 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-30 00:56:37.035860 | orchestrator | 2026-03-30 00:56:37.035863 | orchestrator | 2026-03-30 00:56:37.035867 | orchestrator | 2026-03-30 00:56:37.035871 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:56:37.035876 | orchestrator | Monday 30 March 2026 00:56:34 +0000 (0:00:18.191) 0:01:51.542 ********** 2026-03-30 00:56:37.035880 | orchestrator | =============================================================================== 2026-03-30 00:56:37.035884 | orchestrator | create openstack pool(s) ----------------------------------------------- 37.20s 2026-03-30 00:56:37.035888 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.19s 2026-03-30 00:56:37.035892 | orchestrator | generate keys ---------------------------------------------------------- 17.70s 2026-03-30 00:56:37.035895 | orchestrator | get keys from monitors -------------------------------------------------- 8.94s 2026-03-30 00:56:37.035899 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.06s 2026-03-30 00:56:37.035903 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.88s 2026-03-30 00:56:37.035907 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.34s 2026-03-30 00:56:37.035910 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s 2026-03-30 00:56:37.035914 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.93s 2026-03-30 00:56:37.035918 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.91s 2026-03-30 
00:56:37.035921 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.91s 2026-03-30 00:56:37.035925 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s 2026-03-30 00:56:37.035929 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s 2026-03-30 00:56:37.035933 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2026-03-30 00:56:37.035937 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-03-30 00:56:37.035940 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s 2026-03-30 00:56:37.035944 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2026-03-30 00:56:37.035948 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2026-03-30 00:56:37.035952 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2026-03-30 00:56:37.035958 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s 2026-03-30 00:56:37.035962 | orchestrator | 2026-03-30 00:56:37 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:37.035966 | orchestrator | 2026-03-30 00:56:37 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:37.035970 | orchestrator | 2026-03-30 00:56:37 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:37.035976 | orchestrator | 2026-03-30 00:56:37 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:40.078475 | orchestrator | 2026-03-30 00:56:40 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:40.079755 | orchestrator | 2026-03-30 00:56:40 | INFO  | Task 
333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:40.080365 | orchestrator | 2026-03-30 00:56:40 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:40.080403 | orchestrator | 2026-03-30 00:56:40 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:43.132232 | orchestrator | 2026-03-30 00:56:43 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:43.135627 | orchestrator | 2026-03-30 00:56:43 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:43.138346 | orchestrator | 2026-03-30 00:56:43 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:43.138411 | orchestrator | 2026-03-30 00:56:43 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:46.183946 | orchestrator | 2026-03-30 00:56:46 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:46.186422 | orchestrator | 2026-03-30 00:56:46 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:46.187680 | orchestrator | 2026-03-30 00:56:46 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:46.187887 | orchestrator | 2026-03-30 00:56:46 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:49.240173 | orchestrator | 2026-03-30 00:56:49 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:49.243458 | orchestrator | 2026-03-30 00:56:49 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:49.244192 | orchestrator | 2026-03-30 00:56:49 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:49.245611 | orchestrator | 2026-03-30 00:56:49 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:52.287407 | orchestrator | 2026-03-30 00:56:52 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state 
STARTED 2026-03-30 00:56:52.289151 | orchestrator | 2026-03-30 00:56:52 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:52.292208 | orchestrator | 2026-03-30 00:56:52 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:52.292380 | orchestrator | 2026-03-30 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:55.352513 | orchestrator | 2026-03-30 00:56:55 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:55.353596 | orchestrator | 2026-03-30 00:56:55 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:55.354516 | orchestrator | 2026-03-30 00:56:55 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:55.354569 | orchestrator | 2026-03-30 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:56:58.402905 | orchestrator | 2026-03-30 00:56:58 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:56:58.406321 | orchestrator | 2026-03-30 00:56:58 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:56:58.409831 | orchestrator | 2026-03-30 00:56:58 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:56:58.411283 | orchestrator | 2026-03-30 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:01.463203 | orchestrator | 2026-03-30 00:57:01 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:57:01.464904 | orchestrator | 2026-03-30 00:57:01 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:01.466148 | orchestrator | 2026-03-30 00:57:01 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:01.466196 | orchestrator | 2026-03-30 00:57:01 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:04.495085 | orchestrator | 
2026-03-30 00:57:04 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:57:04.495931 | orchestrator | 2026-03-30 00:57:04 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:04.497357 | orchestrator | 2026-03-30 00:57:04 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:04.497388 | orchestrator | 2026-03-30 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:07.535786 | orchestrator | 2026-03-30 00:57:07 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:57:07.537921 | orchestrator | 2026-03-30 00:57:07 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:07.540550 | orchestrator | 2026-03-30 00:57:07 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:07.540604 | orchestrator | 2026-03-30 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:10.577692 | orchestrator | 2026-03-30 00:57:10 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state STARTED 2026-03-30 00:57:10.579458 | orchestrator | 2026-03-30 00:57:10 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:10.581264 | orchestrator | 2026-03-30 00:57:10 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:10.581393 | orchestrator | 2026-03-30 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:13.632312 | orchestrator | 2026-03-30 00:57:13 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:13.633818 | orchestrator | 2026-03-30 00:57:13 | INFO  | Task ad708306-aa27-4f17-b3a6-aeabf601f372 is in state SUCCESS 2026-03-30 00:57:13.635806 | orchestrator | 2026-03-30 00:57:13 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:13.637859 | orchestrator | 2026-03-30 00:57:13 | INFO  | 
Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:13.637909 | orchestrator | 2026-03-30 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:16.685835 | orchestrator | 2026-03-30 00:57:16 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:16.685923 | orchestrator | 2026-03-30 00:57:16 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:16.687788 | orchestrator | 2026-03-30 00:57:16 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:16.687889 | orchestrator | 2026-03-30 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:19.736347 | orchestrator | 2026-03-30 00:57:19 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:19.738256 | orchestrator | 2026-03-30 00:57:19 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:19.739973 | orchestrator | 2026-03-30 00:57:19 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:19.740657 | orchestrator | 2026-03-30 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:22.781339 | orchestrator | 2026-03-30 00:57:22 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:22.783162 | orchestrator | 2026-03-30 00:57:22 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:22.784354 | orchestrator | 2026-03-30 00:57:22 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:22.784547 | orchestrator | 2026-03-30 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:25.839807 | orchestrator | 2026-03-30 00:57:25 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:25.842040 | orchestrator | 2026-03-30 00:57:25 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state 
STARTED 2026-03-30 00:57:25.844313 | orchestrator | 2026-03-30 00:57:25 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:25.845159 | orchestrator | 2026-03-30 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:28.888930 | orchestrator | 2026-03-30 00:57:28 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:28.890730 | orchestrator | 2026-03-30 00:57:28 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:28.892931 | orchestrator | 2026-03-30 00:57:28 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:28.892979 | orchestrator | 2026-03-30 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:31.950996 | orchestrator | 2026-03-30 00:57:31 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:31.952751 | orchestrator | 2026-03-30 00:57:31 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:31.954057 | orchestrator | 2026-03-30 00:57:31 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:31.954083 | orchestrator | 2026-03-30 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:34.988011 | orchestrator | 2026-03-30 00:57:34 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:34.989874 | orchestrator | 2026-03-30 00:57:34 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED 2026-03-30 00:57:34.990809 | orchestrator | 2026-03-30 00:57:34 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:34.990842 | orchestrator | 2026-03-30 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:38.027535 | orchestrator | 2026-03-30 00:57:38 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:38.027581 | orchestrator | 
2026-03-30 00:57:38 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state STARTED
2026-03-30 00:57:38.028729 | orchestrator | 2026-03-30 00:57:38 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED
2026-03-30 00:57:38.029109 | orchestrator | 2026-03-30 00:57:38 | INFO  | Wait 1 second(s) until the next check
2026-03-30 00:57:47.184662 | orchestrator | 2026-03-30 00:57:47 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED
2026-03-30 00:57:47.187066 | orchestrator | 2026-03-30 00:57:47 | INFO  | Task 333210d8-4396-4dc0-9da2-86c2c8162974 is in state SUCCESS
2026-03-30 00:57:47.189054 | orchestrator | 
2026-03-30 00:57:47.189118 | orchestrator | 
2026-03-30 00:57:47.189128 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-30 00:57:47.189136 | orchestrator | 
2026-03-30 00:57:47.189143 | orchestrator | TASK [Check if ceph keys exist] 
************************************************ 2026-03-30 00:57:47.189205 | orchestrator | Monday 30 March 2026 00:56:37 +0000 (0:00:00.230) 0:00:00.230 ********** 2026-03-30 00:57:47.189213 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-30 00:57:47.189221 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189227 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189234 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 00:57:47.189240 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189247 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-30 00:57:47.189254 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-30 00:57:47.189261 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-30 00:57:47.189330 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-30 00:57:47.189342 | orchestrator | 2026-03-30 00:57:47.189350 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-30 00:57:47.189356 | orchestrator | Monday 30 March 2026 00:56:42 +0000 (0:00:04.942) 0:00:05.172 ********** 2026-03-30 00:57:47.189362 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-30 00:57:47.189403 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189569 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189582 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 00:57:47.189589 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189595 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-30 00:57:47.189627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-30 00:57:47.189635 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-30 00:57:47.189642 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-30 00:57:47.189648 | orchestrator | 2026-03-30 00:57:47.189655 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-30 00:57:47.189661 | orchestrator | Monday 30 March 2026 00:56:46 +0000 (0:00:04.006) 0:00:09.179 ********** 2026-03-30 00:57:47.189668 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-30 00:57:47.189675 | orchestrator | 2026-03-30 00:57:47.189682 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-30 00:57:47.189688 | orchestrator | Monday 30 March 2026 00:56:47 +0000 (0:00:01.043) 0:00:10.222 ********** 2026-03-30 00:57:47.189695 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-30 00:57:47.189701 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189708 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189715 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 00:57:47.189722 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189728 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-30 00:57:47.189735 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-30 00:57:47.189741 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-30 00:57:47.189747 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-30 00:57:47.189754 | orchestrator | 2026-03-30 00:57:47.189760 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-30 00:57:47.189766 | orchestrator | Monday 30 March 2026 00:57:02 +0000 (0:00:14.295) 0:00:24.518 ********** 2026-03-30 00:57:47.189772 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-30 00:57:47.189792 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-30 00:57:47.189800 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-30 00:57:47.189806 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-30 00:57:47.189826 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-30 00:57:47.189834 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-30 00:57:47.189840 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-30 00:57:47.189846 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-30 00:57:47.189853 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-30 00:57:47.189859 | orchestrator | 2026-03-30 00:57:47.189865 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-30 00:57:47.189871 | orchestrator | Monday 30 March 2026 00:57:05 +0000 (0:00:03.037) 0:00:27.555 ********** 2026-03-30 00:57:47.189879 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-30 00:57:47.189885 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189891 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189904 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 00:57:47.189911 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-30 00:57:47.189917 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-30 00:57:47.189924 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-30 00:57:47.189930 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-30 00:57:47.189936 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-30 00:57:47.189943 | orchestrator | 2026-03-30 00:57:47.189948 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:57:47.189955 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:57:47.189963 | orchestrator | 2026-03-30 00:57:47.189970 | orchestrator | 2026-03-30 00:57:47.189976 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 
00:57:47.189982 | orchestrator | Monday 30 March 2026 00:57:11 +0000 (0:00:06.111) 0:00:33.667 ********** 2026-03-30 00:57:47.189988 | orchestrator | =============================================================================== 2026-03-30 00:57:47.189994 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.30s 2026-03-30 00:57:47.190001 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.11s 2026-03-30 00:57:47.190008 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.94s 2026-03-30 00:57:47.190060 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.01s 2026-03-30 00:57:47.190068 | orchestrator | Check if target directories exist --------------------------------------- 3.04s 2026-03-30 00:57:47.190074 | orchestrator | Create share directory -------------------------------------------------- 1.04s 2026-03-30 00:57:47.190081 | orchestrator | 2026-03-30 00:57:47.190087 | orchestrator | 2026-03-30 00:57:47.190093 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 00:57:47.190100 | orchestrator | 2026-03-30 00:57:47.190105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:57:47.190112 | orchestrator | Monday 30 March 2026 00:56:07 +0000 (0:00:00.317) 0:00:00.318 ********** 2026-03-30 00:57:47.190119 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.190126 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.190132 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.190139 | orchestrator | 2026-03-30 00:57:47.190145 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:57:47.190188 | orchestrator | Monday 30 March 2026 00:56:07 +0000 (0:00:00.292) 0:00:00.610 ********** 2026-03-30 00:57:47.190197 | orchestrator | 
ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-30 00:57:47.190205 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-30 00:57:47.190213 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-30 00:57:47.190219 | orchestrator | 2026-03-30 00:57:47.190226 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-30 00:57:47.190232 | orchestrator | 2026-03-30 00:57:47.190238 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-30 00:57:47.190246 | orchestrator | Monday 30 March 2026 00:56:07 +0000 (0:00:00.274) 0:00:00.885 ********** 2026-03-30 00:57:47.190253 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:57:47.190260 | orchestrator | 2026-03-30 00:57:47.190267 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-30 00:57:47.190274 | orchestrator | Monday 30 March 2026 00:56:08 +0000 (0:00:00.596) 0:00:01.482 ********** 2026-03-30 00:57:47.190306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.190324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.190348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.190357 | orchestrator | 2026-03-30 00:57:47.190364 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-30 00:57:47.190379 | orchestrator | Monday 30 March 2026 00:56:09 +0000 (0:00:01.369) 
0:00:02.851 ********** 2026-03-30 00:57:47.190386 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.190394 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.190400 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.190407 | orchestrator | 2026-03-30 00:57:47.190414 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-30 00:57:47.190421 | orchestrator | Monday 30 March 2026 00:56:10 +0000 (0:00:00.283) 0:00:03.135 ********** 2026-03-30 00:57:47.190428 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-30 00:57:47.190436 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-30 00:57:47.190442 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-30 00:57:47.190449 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-30 00:57:47.190456 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-30 00:57:47.190462 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-30 00:57:47.190470 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-30 00:57:47.190477 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-30 00:57:47.190484 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-30 00:57:47.190491 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-30 00:57:47.190502 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-30 00:57:47.190509 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-30 00:57:47.190516 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-30 00:57:47.190523 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-30 00:57:47.190530 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-30 00:57:47.190536 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-30 00:57:47.190544 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-30 00:57:47.190551 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-30 00:57:47.190561 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-30 00:57:47.190568 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-30 00:57:47.190575 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-30 00:57:47.190581 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-30 00:57:47.190591 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-30 00:57:47.190597 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-30 00:57:47.190605 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-30 00:57:47.190614 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-30 00:57:47.190620 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-30 
00:57:47.190627 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-30 00:57:47.190634 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-30 00:57:47.190640 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-30 00:57:47.190646 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-30 00:57:47.190653 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-30 00:57:47.190660 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-30 00:57:47.190666 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-30 00:57:47.190673 | orchestrator | 2026-03-30 00:57:47.190679 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.190685 | orchestrator | Monday 30 March 2026 00:56:10 +0000 (0:00:00.696) 0:00:03.831 ********** 2026-03-30 00:57:47.190693 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.190699 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.190705 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.190712 | orchestrator | 2026-03-30 00:57:47.190718 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-03-30 00:57:47.190729 | orchestrator | Monday 30 March 2026 00:56:11 +0000 (0:00:00.482) 0:00:04.314 ********** 2026-03-30 00:57:47.190736 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.190743 | orchestrator | 2026-03-30 00:57:47.190749 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.190756 | orchestrator | Monday 30 March 2026 00:56:11 +0000 (0:00:00.122) 0:00:04.436 ********** 2026-03-30 00:57:47.190763 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.190769 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.190776 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.190783 | orchestrator | 2026-03-30 00:57:47.190790 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.190796 | orchestrator | Monday 30 March 2026 00:56:11 +0000 (0:00:00.272) 0:00:04.708 ********** 2026-03-30 00:57:47.190802 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.190808 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.190815 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.190821 | orchestrator | 2026-03-30 00:57:47.190828 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.190834 | orchestrator | Monday 30 March 2026 00:56:12 +0000 (0:00:00.286) 0:00:04.994 ********** 2026-03-30 00:57:47.190840 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.190847 | orchestrator | 2026-03-30 00:57:47.190853 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.190859 | orchestrator | Monday 30 March 2026 00:56:12 +0000 (0:00:00.105) 0:00:05.100 ********** 2026-03-30 00:57:47.190865 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.190872 | orchestrator | skipping: [testbed-node-1] 2026-03-30 
00:57:47.190878 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.190884 | orchestrator | 2026-03-30 00:57:47.190890 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.190896 | orchestrator | Monday 30 March 2026 00:56:12 +0000 (0:00:00.424) 0:00:05.524 ********** 2026-03-30 00:57:47.190902 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.190909 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.190916 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.190922 | orchestrator | 2026-03-30 00:57:47.190929 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.190935 | orchestrator | Monday 30 March 2026 00:56:12 +0000 (0:00:00.292) 0:00:05.816 ********** 2026-03-30 00:57:47.190941 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.190948 | orchestrator | 2026-03-30 00:57:47.190955 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.190962 | orchestrator | Monday 30 March 2026 00:56:12 +0000 (0:00:00.122) 0:00:05.939 ********** 2026-03-30 00:57:47.190968 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.190975 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.190981 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.190987 | orchestrator | 2026-03-30 00:57:47.190994 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191005 | orchestrator | Monday 30 March 2026 00:56:13 +0000 (0:00:00.283) 0:00:06.223 ********** 2026-03-30 00:57:47.191011 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191018 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191024 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191031 | orchestrator | 2026-03-30 00:57:47.191037 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-03-30 00:57:47.191044 | orchestrator | Monday 30 March 2026 00:56:13 +0000 (0:00:00.277) 0:00:06.500 ********** 2026-03-30 00:57:47.191050 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191056 | orchestrator | 2026-03-30 00:57:47.191063 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191069 | orchestrator | Monday 30 March 2026 00:56:13 +0000 (0:00:00.103) 0:00:06.604 ********** 2026-03-30 00:57:47.191081 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191087 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.191094 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191101 | orchestrator | 2026-03-30 00:57:47.191107 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191114 | orchestrator | Monday 30 March 2026 00:56:14 +0000 (0:00:00.456) 0:00:07.061 ********** 2026-03-30 00:57:47.191120 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191126 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191133 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191140 | orchestrator | 2026-03-30 00:57:47.191146 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.191166 | orchestrator | Monday 30 March 2026 00:56:14 +0000 (0:00:00.296) 0:00:07.358 ********** 2026-03-30 00:57:47.191173 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191180 | orchestrator | 2026-03-30 00:57:47.191186 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191193 | orchestrator | Monday 30 March 2026 00:56:14 +0000 (0:00:00.120) 0:00:07.479 ********** 2026-03-30 00:57:47.191200 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191206 | orchestrator | skipping: [testbed-node-1] 
2026-03-30 00:57:47.191212 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191219 | orchestrator | 2026-03-30 00:57:47.191225 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191231 | orchestrator | Monday 30 March 2026 00:56:14 +0000 (0:00:00.267) 0:00:07.746 ********** 2026-03-30 00:57:47.191238 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191245 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191251 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191257 | orchestrator | 2026-03-30 00:57:47.191263 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.191270 | orchestrator | Monday 30 March 2026 00:56:15 +0000 (0:00:00.312) 0:00:08.058 ********** 2026-03-30 00:57:47.191276 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191283 | orchestrator | 2026-03-30 00:57:47.191290 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191330 | orchestrator | Monday 30 March 2026 00:56:15 +0000 (0:00:00.284) 0:00:08.343 ********** 2026-03-30 00:57:47.191339 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191346 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.191353 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191359 | orchestrator | 2026-03-30 00:57:47.191365 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191372 | orchestrator | Monday 30 March 2026 00:56:15 +0000 (0:00:00.262) 0:00:08.606 ********** 2026-03-30 00:57:47.191379 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191386 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191393 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191399 | orchestrator | 2026-03-30 00:57:47.191405 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-03-30 00:57:47.191411 | orchestrator | Monday 30 March 2026 00:56:15 +0000 (0:00:00.274) 0:00:08.880 ********** 2026-03-30 00:57:47.191417 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191425 | orchestrator | 2026-03-30 00:57:47.191431 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191438 | orchestrator | Monday 30 March 2026 00:56:16 +0000 (0:00:00.107) 0:00:08.988 ********** 2026-03-30 00:57:47.191445 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191451 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.191457 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191464 | orchestrator | 2026-03-30 00:57:47.191471 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191477 | orchestrator | Monday 30 March 2026 00:56:16 +0000 (0:00:00.297) 0:00:09.285 ********** 2026-03-30 00:57:47.191484 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191496 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191502 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191508 | orchestrator | 2026-03-30 00:57:47.191515 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.191522 | orchestrator | Monday 30 March 2026 00:56:16 +0000 (0:00:00.528) 0:00:09.813 ********** 2026-03-30 00:57:47.191529 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191535 | orchestrator | 2026-03-30 00:57:47.191542 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191549 | orchestrator | Monday 30 March 2026 00:56:16 +0000 (0:00:00.115) 0:00:09.929 ********** 2026-03-30 00:57:47.191555 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191562 | orchestrator | skipping: 
[testbed-node-1] 2026-03-30 00:57:47.191568 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191574 | orchestrator | 2026-03-30 00:57:47.191581 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191588 | orchestrator | Monday 30 March 2026 00:56:17 +0000 (0:00:00.325) 0:00:10.255 ********** 2026-03-30 00:57:47.191599 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191606 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191613 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191619 | orchestrator | 2026-03-30 00:57:47.191626 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.191633 | orchestrator | Monday 30 March 2026 00:56:17 +0000 (0:00:00.287) 0:00:10.542 ********** 2026-03-30 00:57:47.191638 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191645 | orchestrator | 2026-03-30 00:57:47.191658 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191665 | orchestrator | Monday 30 March 2026 00:56:17 +0000 (0:00:00.121) 0:00:10.664 ********** 2026-03-30 00:57:47.191672 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191679 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.191686 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191692 | orchestrator | 2026-03-30 00:57:47.191700 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-30 00:57:47.191706 | orchestrator | Monday 30 March 2026 00:56:17 +0000 (0:00:00.275) 0:00:10.939 ********** 2026-03-30 00:57:47.191712 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:57:47.191719 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:57:47.191725 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:57:47.191732 | orchestrator | 2026-03-30 00:57:47.191738 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2026-03-30 00:57:47.191744 | orchestrator | Monday 30 March 2026 00:56:18 +0000 (0:00:00.453) 0:00:11.393 ********** 2026-03-30 00:57:47.191750 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191756 | orchestrator | 2026-03-30 00:57:47.191763 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-30 00:57:47.191770 | orchestrator | Monday 30 March 2026 00:56:18 +0000 (0:00:00.110) 0:00:11.503 ********** 2026-03-30 00:57:47.191776 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191783 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.191789 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.191795 | orchestrator | 2026-03-30 00:57:47.191801 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-30 00:57:47.191807 | orchestrator | Monday 30 March 2026 00:56:18 +0000 (0:00:00.288) 0:00:11.792 ********** 2026-03-30 00:57:47.191813 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:57:47.191820 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:57:47.191826 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:57:47.191833 | orchestrator | 2026-03-30 00:57:47.191840 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-30 00:57:47.191846 | orchestrator | Monday 30 March 2026 00:56:20 +0000 (0:00:01.661) 0:00:13.453 ********** 2026-03-30 00:57:47.191853 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-30 00:57:47.191872 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-30 00:57:47.191879 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-30 00:57:47.191885 | orchestrator | 2026-03-30 
00:57:47.191891 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-30 00:57:47.191897 | orchestrator | Monday 30 March 2026 00:56:22 +0000 (0:00:02.011) 0:00:15.465 ********** 2026-03-30 00:57:47.191905 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-30 00:57:47.191912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-30 00:57:47.191919 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-30 00:57:47.191925 | orchestrator | 2026-03-30 00:57:47.191932 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-30 00:57:47.191938 | orchestrator | Monday 30 March 2026 00:56:24 +0000 (0:00:01.970) 0:00:17.435 ********** 2026-03-30 00:57:47.191945 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-30 00:57:47.191952 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-30 00:57:47.191958 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-30 00:57:47.191966 | orchestrator | 2026-03-30 00:57:47.191973 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-30 00:57:47.191979 | orchestrator | Monday 30 March 2026 00:56:26 +0000 (0:00:01.636) 0:00:19.071 ********** 2026-03-30 00:57:47.191986 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.191992 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.191998 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.192006 | orchestrator | 2026-03-30 00:57:47.192013 | orchestrator | TASK [horizon : Copying over custom themes] 
************************************ 2026-03-30 00:57:47.192019 | orchestrator | Monday 30 March 2026 00:56:26 +0000 (0:00:00.286) 0:00:19.358 ********** 2026-03-30 00:57:47.192025 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.192032 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.192038 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.192045 | orchestrator | 2026-03-30 00:57:47.192052 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-30 00:57:47.192058 | orchestrator | Monday 30 March 2026 00:56:26 +0000 (0:00:00.280) 0:00:19.639 ********** 2026-03-30 00:57:47.192065 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:57:47.192072 | orchestrator | 2026-03-30 00:57:47.192078 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-30 00:57:47.192085 | orchestrator | Monday 30 March 2026 00:56:27 +0000 (0:00:00.733) 0:00:20.373 ********** 2026-03-30 00:57:47.192108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.192126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.192140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.192246 | orchestrator | 2026-03-30 00:57:47.192259 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-30 00:57:47.192266 | orchestrator | Monday 30 March 2026 00:56:28 +0000 (0:00:01.548) 0:00:21.921 ********** 2026-03-30 00:57:47.192285 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:57:47.192293 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.192301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:57:47.192314 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.192333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:57:47.192347 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.192354 | orchestrator | 2026-03-30 00:57:47.192360 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-30 00:57:47.192366 | orchestrator | Monday 30 March 2026 00:56:29 +0000 (0:00:00.791) 0:00:22.713 ********** 2026-03-30 00:57:47.192374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:57:47.192381 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.192398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:57:47.192410 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.192417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-30 00:57:47.192424 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.192430 | orchestrator | 2026-03-30 00:57:47.192436 | orchestrator | TASK [horizon : Deploy horizon container] 
************************************** 2026-03-30 00:57:47.192443 | orchestrator | Monday 30 March 2026 00:56:30 +0000 (0:00:01.143) 0:00:23.856 ********** 2026-03-30 00:57:47.192459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.192473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.192489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-30 00:57:47.192503 | orchestrator | 2026-03-30 00:57:47.192509 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-30 00:57:47.192516 | orchestrator | Monday 30 March 2026 00:56:32 +0000 (0:00:01.487) 0:00:25.344 ********** 2026-03-30 00:57:47.192523 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:57:47.192529 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:57:47.192536 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:57:47.192542 | orchestrator | 2026-03-30 00:57:47.192548 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-30 00:57:47.192554 | orchestrator | Monday 30 March 2026 00:56:32 +0000 (0:00:00.306) 0:00:25.651 ********** 2026-03-30 00:57:47.192560 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:57:47.192567 | orchestrator | 2026-03-30 00:57:47.192572 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-30 00:57:47.192578 | orchestrator | Monday 30 March 2026 00:56:33 +0000 (0:00:00.687) 0:00:26.338 ********** 2026-03-30 00:57:47.192583 | 
orchestrator | changed: [testbed-node-0] 2026-03-30 00:57:47.192590 | orchestrator | 2026-03-30 00:57:47.192596 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-30 00:57:47.192601 | orchestrator | Monday 30 March 2026 00:56:35 +0000 (0:00:02.475) 0:00:28.814 ********** 2026-03-30 00:57:47.192608 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:57:47.192613 | orchestrator | 2026-03-30 00:57:47.192619 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-30 00:57:47.192625 | orchestrator | Monday 30 March 2026 00:56:38 +0000 (0:00:02.421) 0:00:31.235 ********** 2026-03-30 00:57:47.192631 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:57:47.192637 | orchestrator | 2026-03-30 00:57:47.192643 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-30 00:57:47.192649 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:16.163) 0:00:47.399 ********** 2026-03-30 00:57:47.192656 | orchestrator | 2026-03-30 00:57:47.192662 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-30 00:57:47.192669 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:00.065) 0:00:47.464 ********** 2026-03-30 00:57:47.192685 | orchestrator | 2026-03-30 00:57:47.192692 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-30 00:57:47.192698 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:00.062) 0:00:47.527 ********** 2026-03-30 00:57:47.192704 | orchestrator | 2026-03-30 00:57:47.192710 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-30 00:57:47.192717 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:00.064) 0:00:47.591 ********** 2026-03-30 00:57:47.192724 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:57:47.192730 | 
orchestrator | changed: [testbed-node-1] 2026-03-30 00:57:47.192736 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:57:47.192743 | orchestrator | 2026-03-30 00:57:47.192749 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:57:47.192756 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-30 00:57:47.192768 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-30 00:57:47.192774 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-30 00:57:47.192781 | orchestrator | 2026-03-30 00:57:47.192787 | orchestrator | 2026-03-30 00:57:47.192800 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:57:47.192807 | orchestrator | Monday 30 March 2026 00:57:44 +0000 (0:00:49.481) 0:01:37.073 ********** 2026-03-30 00:57:47.192813 | orchestrator | =============================================================================== 2026-03-30 00:57:47.192819 | orchestrator | horizon : Restart horizon container ------------------------------------ 49.48s 2026-03-30 00:57:47.192825 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.16s 2026-03-30 00:57:47.192832 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.48s 2026-03-30 00:57:47.192839 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.42s 2026-03-30 00:57:47.192845 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.01s 2026-03-30 00:57:47.192851 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.97s 2026-03-30 00:57:47.192858 | orchestrator | horizon : Copying over config.json files for services ------------------- 
1.66s 2026-03-30 00:57:47.192864 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.64s 2026-03-30 00:57:47.192871 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.55s 2026-03-30 00:57:47.192877 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.49s 2026-03-30 00:57:47.192883 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.37s 2026-03-30 00:57:47.192890 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.14s 2026-03-30 00:57:47.192896 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.79s 2026-03-30 00:57:47.192902 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-03-30 00:57:47.192908 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2026-03-30 00:57:47.192915 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2026-03-30 00:57:47.192921 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-03-30 00:57:47.192928 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-03-30 00:57:47.192934 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-03-30 00:57:47.192942 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s 2026-03-30 00:57:47.192948 | orchestrator | 2026-03-30 00:57:47 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:47.192963 | orchestrator | 2026-03-30 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:50.239184 | orchestrator | 2026-03-30 00:57:50 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state 
STARTED 2026-03-30 00:57:50.241383 | orchestrator | 2026-03-30 00:57:50 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:50.241632 | orchestrator | 2026-03-30 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:53.283558 | orchestrator | 2026-03-30 00:57:53 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:53.285380 | orchestrator | 2026-03-30 00:57:53 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:53.285441 | orchestrator | 2026-03-30 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:56.333510 | orchestrator | 2026-03-30 00:57:56 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:56.334813 | orchestrator | 2026-03-30 00:57:56 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:56.334873 | orchestrator | 2026-03-30 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:57:59.385738 | orchestrator | 2026-03-30 00:57:59 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:57:59.388207 | orchestrator | 2026-03-30 00:57:59 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:57:59.388362 | orchestrator | 2026-03-30 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:02.440504 | orchestrator | 2026-03-30 00:58:02 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:58:02.441989 | orchestrator | 2026-03-30 00:58:02 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:02.442106 | orchestrator | 2026-03-30 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:05.501911 | orchestrator | 2026-03-30 00:58:05 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:58:05.503343 | orchestrator | 2026-03-30 00:58:05 | INFO  
| Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:05.503385 | orchestrator | 2026-03-30 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:08.539815 | orchestrator | 2026-03-30 00:58:08 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state STARTED 2026-03-30 00:58:08.542179 | orchestrator | 2026-03-30 00:58:08 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:08.542232 | orchestrator | 2026-03-30 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:11.581826 | orchestrator | 2026-03-30 00:58:11 | INFO  | Task fc5ed33e-80fa-440e-beb0-011cedc567da is in state SUCCESS 2026-03-30 00:58:11.582256 | orchestrator | 2026-03-30 00:58:11 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:11.583269 | orchestrator | 2026-03-30 00:58:11 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:11.584429 | orchestrator | 2026-03-30 00:58:11 | INFO  | Task 06707f0f-54e5-432d-9d20-f38672e2c302 is in state STARTED 2026-03-30 00:58:11.585162 | orchestrator | 2026-03-30 00:58:11 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:11.585193 | orchestrator | 2026-03-30 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:14.612386 | orchestrator | 2026-03-30 00:58:14 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:14.612623 | orchestrator | 2026-03-30 00:58:14 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:14.613475 | orchestrator | 2026-03-30 00:58:14 | INFO  | Task 06707f0f-54e5-432d-9d20-f38672e2c302 is in state SUCCESS 2026-03-30 00:58:14.614225 | orchestrator | 2026-03-30 00:58:14 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:14.614279 | orchestrator | 2026-03-30 00:58:14 | INFO  | Wait 1 second(s) until the 
next check 2026-03-30 00:58:17.637734 | orchestrator | 2026-03-30 00:58:17 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:17.637819 | orchestrator | 2026-03-30 00:58:17 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:17.638398 | orchestrator | 2026-03-30 00:58:17 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:17.639021 | orchestrator | 2026-03-30 00:58:17 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:17.640477 | orchestrator | 2026-03-30 00:58:17 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:17.640550 | orchestrator | 2026-03-30 00:58:17 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:20.674530 | orchestrator | 2026-03-30 00:58:20 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:20.675072 | orchestrator | 2026-03-30 00:58:20 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:20.675524 | orchestrator | 2026-03-30 00:58:20 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:20.677514 | orchestrator | 2026-03-30 00:58:20 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:20.677568 | orchestrator | 2026-03-30 00:58:20 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:20.677579 | orchestrator | 2026-03-30 00:58:20 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:23.708693 | orchestrator | 2026-03-30 00:58:23 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:23.711369 | orchestrator | 2026-03-30 00:58:23 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:23.713011 | orchestrator | 2026-03-30 00:58:23 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state 
STARTED 2026-03-30 00:58:23.714773 | orchestrator | 2026-03-30 00:58:23 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:23.716737 | orchestrator | 2026-03-30 00:58:23 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:23.716773 | orchestrator | 2026-03-30 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:26.753233 | orchestrator | 2026-03-30 00:58:26 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:26.753713 | orchestrator | 2026-03-30 00:58:26 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:26.755694 | orchestrator | 2026-03-30 00:58:26 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:26.757082 | orchestrator | 2026-03-30 00:58:26 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:26.758220 | orchestrator | 2026-03-30 00:58:26 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:26.759492 | orchestrator | 2026-03-30 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:29.807949 | orchestrator | 2026-03-30 00:58:29 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:29.812691 | orchestrator | 2026-03-30 00:58:29 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:29.812738 | orchestrator | 2026-03-30 00:58:29 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:29.812743 | orchestrator | 2026-03-30 00:58:29 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:29.812747 | orchestrator | 2026-03-30 00:58:29 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:29.812751 | orchestrator | 2026-03-30 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-03-30 
00:58:32.861349 | orchestrator | 2026-03-30 00:58:32 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:32.861424 | orchestrator | 2026-03-30 00:58:32 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:32.861430 | orchestrator | 2026-03-30 00:58:32 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:32.863315 | orchestrator | 2026-03-30 00:58:32 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:32.864191 | orchestrator | 2026-03-30 00:58:32 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:32.864242 | orchestrator | 2026-03-30 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:35.906548 | orchestrator | 2026-03-30 00:58:35 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:35.907682 | orchestrator | 2026-03-30 00:58:35 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:35.908453 | orchestrator | 2026-03-30 00:58:35 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:35.909987 | orchestrator | 2026-03-30 00:58:35 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:35.910588 | orchestrator | 2026-03-30 00:58:35 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:35.910633 | orchestrator | 2026-03-30 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:38.962880 | orchestrator | 2026-03-30 00:58:38 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:38.963985 | orchestrator | 2026-03-30 00:58:38 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:38.964624 | orchestrator | 2026-03-30 00:58:38 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 
00:58:38.966530 | orchestrator | 2026-03-30 00:58:38 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:38.967257 | orchestrator | 2026-03-30 00:58:38 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state STARTED 2026-03-30 00:58:38.967361 | orchestrator | 2026-03-30 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:42.048085 | orchestrator | 2026-03-30 00:58:42.048181 | orchestrator | 2026-03-30 00:58:42.048195 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-30 00:58:42.049075 | orchestrator | 2026-03-30 00:58:42.049123 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-30 00:58:42.049131 | orchestrator | Monday 30 March 2026 00:57:14 +0000 (0:00:00.262) 0:00:00.262 ********** 2026-03-30 00:58:42.049136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-30 00:58:42.049163 | orchestrator | 2026-03-30 00:58:42.049167 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-30 00:58:42.049171 | orchestrator | Monday 30 March 2026 00:57:14 +0000 (0:00:00.204) 0:00:00.467 ********** 2026-03-30 00:58:42.049176 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-30 00:58:42.049181 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-30 00:58:42.049196 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-30 00:58:42.049201 | orchestrator | 2026-03-30 00:58:42.049205 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-30 00:58:42.049209 | orchestrator | Monday 30 March 2026 00:57:16 +0000 (0:00:01.486) 0:00:01.953 ********** 2026-03-30 00:58:42.049213 | orchestrator | changed: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-30 00:58:42.049217 | orchestrator | 2026-03-30 00:58:42.049224 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-30 00:58:42.049230 | orchestrator | Monday 30 March 2026 00:57:17 +0000 (0:00:01.058) 0:00:03.012 ********** 2026-03-30 00:58:42.049240 | orchestrator | changed: [testbed-manager] 2026-03-30 00:58:42.049248 | orchestrator | 2026-03-30 00:58:42.049254 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-30 00:58:42.049260 | orchestrator | Monday 30 March 2026 00:57:18 +0000 (0:00:00.811) 0:00:03.823 ********** 2026-03-30 00:58:42.049266 | orchestrator | changed: [testbed-manager] 2026-03-30 00:58:42.049271 | orchestrator | 2026-03-30 00:58:42.049278 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-30 00:58:42.049284 | orchestrator | Monday 30 March 2026 00:57:19 +0000 (0:00:00.885) 0:00:04.709 ********** 2026-03-30 00:58:42.049290 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
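The `Manage cephclient service` task above fails once ("10 retries left") and then reports `ok` about 40 seconds later: this is Ansible's standard `retries`/`delay`/`until` loop, which re-runs a task until its condition holds. A minimal Python sketch of the same poll-until-healthy pattern follows; the helper name `wait_until` and its exact attempt accounting are illustrative assumptions, not taken from the playbook:

```python
import time


def wait_until(check, retries=10, delay=5, sleep=time.sleep):
    """Re-run `check` until it returns truthy, loosely mirroring Ansible's
    retries/delay/until semantics: one initial attempt plus up to `retries`
    retries, sleeping `delay` seconds between attempts.

    Returns the number of attempts used; raises TimeoutError on exhaustion.
    (Sketch only -- attempt accounting is an assumption, not Ansible's exact rule.)
    """
    for attempt in range(1, retries + 2):
        if check():
            return attempt
        if attempt <= retries:          # no pointless sleep after the last try
            sleep(delay)
    raise TimeoutError(f"check still failing after {retries + 1} attempts")
```

In the log, the check is effectively "is the cephclient compose service up"; the same loop shape also explains the orchestrator's repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" messages further up.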
2026-03-30 00:58:42.049297 | orchestrator | ok: [testbed-manager] 2026-03-30 00:58:42.049303 | orchestrator | 2026-03-30 00:58:42.049309 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-30 00:58:42.049314 | orchestrator | Monday 30 March 2026 00:57:59 +0000 (0:00:40.021) 0:00:44.730 ********** 2026-03-30 00:58:42.049321 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-30 00:58:42.049327 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-30 00:58:42.049332 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-30 00:58:42.049338 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-30 00:58:42.049345 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-30 00:58:42.049350 | orchestrator | 2026-03-30 00:58:42.049354 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-30 00:58:42.049358 | orchestrator | Monday 30 March 2026 00:58:03 +0000 (0:00:04.057) 0:00:48.788 ********** 2026-03-30 00:58:42.049362 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-30 00:58:42.049366 | orchestrator | 2026-03-30 00:58:42.049369 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-30 00:58:42.049374 | orchestrator | Monday 30 March 2026 00:58:03 +0000 (0:00:00.611) 0:00:49.399 ********** 2026-03-30 00:58:42.049378 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:58:42.049382 | orchestrator | 2026-03-30 00:58:42.049385 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-30 00:58:42.049389 | orchestrator | Monday 30 March 2026 00:58:03 +0000 (0:00:00.135) 0:00:49.535 ********** 2026-03-30 00:58:42.049393 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:58:42.049396 | orchestrator | 2026-03-30 00:58:42.049400 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-30 00:58:42.049404 | orchestrator | Monday 30 March 2026 00:58:04 +0000 (0:00:00.342) 0:00:49.877 ********** 2026-03-30 00:58:42.049408 | orchestrator | changed: [testbed-manager] 2026-03-30 00:58:42.049411 | orchestrator | 2026-03-30 00:58:42.049415 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-30 00:58:42.049424 | orchestrator | Monday 30 March 2026 00:58:05 +0000 (0:00:01.421) 0:00:51.298 ********** 2026-03-30 00:58:42.049428 | orchestrator | changed: [testbed-manager] 2026-03-30 00:58:42.049432 | orchestrator | 2026-03-30 00:58:42.049436 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-30 00:58:42.049439 | orchestrator | Monday 30 March 2026 00:58:06 +0000 (0:00:00.696) 0:00:51.995 ********** 2026-03-30 00:58:42.049443 | orchestrator | changed: [testbed-manager] 2026-03-30 00:58:42.049447 | orchestrator | 2026-03-30 00:58:42.049450 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-30 00:58:42.049454 | orchestrator | Monday 30 March 2026 00:58:06 +0000 (0:00:00.566) 0:00:52.561 ********** 2026-03-30 00:58:42.049458 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-30 00:58:42.049462 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-30 00:58:42.049466 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-30 00:58:42.049470 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-30 00:58:42.049474 | orchestrator | 2026-03-30 00:58:42.049478 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:58:42.049482 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-30 00:58:42.049487 | orchestrator | 2026-03-30 00:58:42.049491 | orchestrator | 2026-03-30 
00:58:42.049558 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:58:42.049563 | orchestrator | Monday 30 March 2026 00:58:08 +0000 (0:00:01.507) 0:00:54.069 ********** 2026-03-30 00:58:42.049567 | orchestrator | =============================================================================== 2026-03-30 00:58:42.049571 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.02s 2026-03-30 00:58:42.049575 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.06s 2026-03-30 00:58:42.049589 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.51s 2026-03-30 00:58:42.049594 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.49s 2026-03-30 00:58:42.049597 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.42s 2026-03-30 00:58:42.049601 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.06s 2026-03-30 00:58:42.049611 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s 2026-03-30 00:58:42.049620 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s 2026-03-30 00:58:42.049623 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.70s 2026-03-30 00:58:42.049627 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.61s 2026-03-30 00:58:42.049631 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2026-03-30 00:58:42.049635 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.34s 2026-03-30 00:58:42.049638 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-03-30 00:58:42.049642 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-03-30 00:58:42.049646 | orchestrator | 2026-03-30 00:58:42.049650 | orchestrator | 2026-03-30 00:58:42.049653 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 00:58:42.049657 | orchestrator | 2026-03-30 00:58:42.049661 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:58:42.049664 | orchestrator | Monday 30 March 2026 00:58:11 +0000 (0:00:00.207) 0:00:00.207 ********** 2026-03-30 00:58:42.049668 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.049672 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:58:42.049676 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.049680 | orchestrator | 2026-03-30 00:58:42.049683 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:58:42.049695 | orchestrator | Monday 30 March 2026 00:58:11 +0000 (0:00:00.299) 0:00:00.507 ********** 2026-03-30 00:58:42.049699 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-30 00:58:42.049703 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-30 00:58:42.049707 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-30 00:58:42.049710 | orchestrator | 2026-03-30 00:58:42.049714 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-30 00:58:42.049718 | orchestrator | 2026-03-30 00:58:42.049722 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-30 00:58:42.049725 | orchestrator | Monday 30 March 2026 00:58:12 +0000 (0:00:00.470) 0:00:00.978 ********** 2026-03-30 00:58:42.049729 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:58:42.049733 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.049737 | orchestrator | ok: 
[testbed-node-0] 2026-03-30 00:58:42.049740 | orchestrator | 2026-03-30 00:58:42.049744 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:58:42.049749 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:58:42.049753 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:58:42.049757 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:58:42.049761 | orchestrator | 2026-03-30 00:58:42.049764 | orchestrator | 2026-03-30 00:58:42.049768 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:58:42.049772 | orchestrator | Monday 30 March 2026 00:58:13 +0000 (0:00:01.005) 0:00:01.983 ********** 2026-03-30 00:58:42.049776 | orchestrator | =============================================================================== 2026-03-30 00:58:42.049779 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.01s 2026-03-30 00:58:42.049783 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-03-30 00:58:42.049787 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-03-30 00:58:42.049791 | orchestrator | 2026-03-30 00:58:42.049794 | orchestrator | 2026-03-30 00:58:42.049798 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 00:58:42.049802 | orchestrator | 2026-03-30 00:58:42.049805 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 00:58:42.049809 | orchestrator | Monday 30 March 2026 00:56:07 +0000 (0:00:00.327) 0:00:00.327 ********** 2026-03-30 00:58:42.049813 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.049816 | 
orchestrator | ok: [testbed-node-1] 2026-03-30 00:58:42.049820 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.049824 | orchestrator | 2026-03-30 00:58:42.049828 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 00:58:42.049831 | orchestrator | Monday 30 March 2026 00:56:07 +0000 (0:00:00.292) 0:00:00.619 ********** 2026-03-30 00:58:42.049835 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-30 00:58:42.049839 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-30 00:58:42.049843 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-30 00:58:42.049847 | orchestrator | 2026-03-30 00:58:42.049851 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-30 00:58:42.049854 | orchestrator | 2026-03-30 00:58:42.049875 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-30 00:58:42.049879 | orchestrator | Monday 30 March 2026 00:56:08 +0000 (0:00:00.297) 0:00:00.917 ********** 2026-03-30 00:58:42.049883 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:58:42.049887 | orchestrator | 2026-03-30 00:58:42.049891 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-30 00:58:42.049898 | orchestrator | Monday 30 March 2026 00:56:08 +0000 (0:00:00.652) 0:00:01.570 ********** 2026-03-30 00:58:42.049910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.049920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.049927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.049934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.049961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-30 00:58:42.050092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050112 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050116 | orchestrator | 2026-03-30 00:58:42.050120 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-30 00:58:42.050124 | orchestrator | Monday 30 March 2026 00:56:10 +0000 (0:00:02.062) 0:00:03.633 ********** 2026-03-30 00:58:42.050128 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050132 | orchestrator | 2026-03-30 00:58:42.050135 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-30 00:58:42.050139 | orchestrator | Monday 30 March 2026 00:56:10 +0000 (0:00:00.115) 0:00:03.748 ********** 2026-03-30 00:58:42.050143 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050147 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050153 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050159 | orchestrator | 2026-03-30 00:58:42.050165 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-30 00:58:42.050171 | orchestrator | Monday 30 March 2026 00:56:11 +0000 (0:00:00.247) 0:00:03.995 ********** 2026-03-30 00:58:42.050177 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 00:58:42.050189 | orchestrator | 2026-03-30 00:58:42.050195 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-30 
00:58:42.050201 | orchestrator | Monday 30 March 2026 00:56:11 +0000 (0:00:00.868) 0:00:04.864 ********** 2026-03-30 00:58:42.050206 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:58:42.050212 | orchestrator | 2026-03-30 00:58:42.050218 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-30 00:58:42.050253 | orchestrator | Monday 30 March 2026 00:56:12 +0000 (0:00:00.637) 0:00:05.501 ********** 2026-03-30 00:58:42.050265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050317 | orchestrator | 2026-03-30 00:58:42.050321 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-30 00:58:42.050325 | orchestrator | Monday 30 March 2026 00:56:15 +0000 (0:00:03.210) 0:00:08.711 ********** 2026-03-30 00:58:42.050329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050359 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-30 00:58:42.050372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050379 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050403 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050407 | orchestrator | 2026-03-30 00:58:42.050411 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-30 00:58:42.050415 | orchestrator | Monday 30 March 2026 00:56:16 +0000 (0:00:00.607) 0:00:09.318 ********** 2026-03-30 00:58:42.050419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050434 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:58:42.050447 | orchestrator | 2026-03-30 00:58:42 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:42.050451 | orchestrator | 2026-03-30 00:58:42 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:42.050456 | orchestrator | 2026-03-30 00:58:42 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:42.050460 | orchestrator | 2026-03-30 00:58:42 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:42.050464 | orchestrator | 2026-03-30 00:58:42 | INFO  | Task 00ec8c64-9ef6-4806-8cf5-582b7a29d522 is in state SUCCESS 2026-03-30 00:58:42.050477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050488 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050522 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050527 | orchestrator | 2026-03-30 00:58:42.050583 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-30 00:58:42.050588 | orchestrator | Monday 30 March 2026 00:56:17 +0000 (0:00:00.932) 0:00:10.251 ********** 2026-03-30 00:58:42.050592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050610 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050647 | orchestrator | 2026-03-30 00:58:42.050651 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-30 00:58:42.050655 | orchestrator | Monday 30 March 2026 00:56:20 +0000 (0:00:03.319) 0:00:13.570 ********** 2026-03-30 00:58:42.050670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050697 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.050716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.050741 | orchestrator | 2026-03-30 00:58:42.050744 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-30 00:58:42.050749 | orchestrator | Monday 30 March 2026 00:56:25 +0000 (0:00:04.950) 0:00:18.521 ********** 2026-03-30 00:58:42.050753 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.050756 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:58:42.050760 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:58:42.050764 | orchestrator | 2026-03-30 00:58:42.050768 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-30 00:58:42.050772 | orchestrator | Monday 30 March 2026 00:56:27 +0000 (0:00:01.405) 0:00:19.926 ********** 2026-03-30 00:58:42.050776 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050780 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050783 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050787 | orchestrator | 2026-03-30 00:58:42.050791 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-30 00:58:42.050795 | orchestrator | Monday 30 March 2026 00:56:27 +0000 (0:00:00.906) 0:00:20.833 ********** 2026-03-30 00:58:42.050799 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050803 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050808 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050811 | orchestrator | 2026-03-30 00:58:42.050815 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-30 00:58:42.050819 | orchestrator | Monday 30 March 2026 00:56:28 +0000 (0:00:00.290) 0:00:21.123 ********** 
2026-03-30 00:58:42.050823 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050827 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050831 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050835 | orchestrator | 2026-03-30 00:58:42.050838 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-30 00:58:42.050842 | orchestrator | Monday 30 March 2026 00:56:28 +0000 (0:00:00.295) 0:00:21.418 ********** 2026-03-30 00:58:42.050851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050871 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-30 00:58:42.050879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050891 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-30 00:58:42.050906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-30 00:58:42.050910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-30 00:58:42.050914 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050918 | orchestrator | 2026-03-30 00:58:42.050922 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-30 00:58:42.050926 | orchestrator | Monday 30 
March 2026 00:56:29 +0000 (0:00:00.571) 0:00:21.990 ********** 2026-03-30 00:58:42.050929 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.050934 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.050937 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.050941 | orchestrator | 2026-03-30 00:58:42.050947 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-30 00:58:42.050953 | orchestrator | Monday 30 March 2026 00:56:29 +0000 (0:00:00.457) 0:00:22.448 ********** 2026-03-30 00:58:42.050959 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-30 00:58:42.050966 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-30 00:58:42.050972 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-30 00:58:42.050978 | orchestrator | 2026-03-30 00:58:42.050985 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-30 00:58:42.050991 | orchestrator | Monday 30 March 2026 00:56:31 +0000 (0:00:01.758) 0:00:24.207 ********** 2026-03-30 00:58:42.050997 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 00:58:42.051004 | orchestrator | 2026-03-30 00:58:42.051008 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-30 00:58:42.051012 | orchestrator | Monday 30 March 2026 00:56:32 +0000 (0:00:00.930) 0:00:25.137 ********** 2026-03-30 00:58:42.051016 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051019 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.051024 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.051028 | orchestrator | 2026-03-30 00:58:42.051032 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] 
***************** 2026-03-30 00:58:42.051035 | orchestrator | Monday 30 March 2026 00:56:32 +0000 (0:00:00.542) 0:00:25.680 ********** 2026-03-30 00:58:42.051082 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 00:58:42.051088 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-30 00:58:42.051092 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-30 00:58:42.051096 | orchestrator | 2026-03-30 00:58:42.051101 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-30 00:58:42.051105 | orchestrator | Monday 30 March 2026 00:56:34 +0000 (0:00:01.215) 0:00:26.896 ********** 2026-03-30 00:58:42.051109 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.051117 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:58:42.051121 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.051125 | orchestrator | 2026-03-30 00:58:42.051129 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-30 00:58:42.051133 | orchestrator | Monday 30 March 2026 00:56:34 +0000 (0:00:00.484) 0:00:27.380 ********** 2026-03-30 00:58:42.051137 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-30 00:58:42.051141 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-30 00:58:42.051145 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-30 00:58:42.051149 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-30 00:58:42.051153 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-30 00:58:42.051160 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-30 00:58:42.051164 | orchestrator | changed: [testbed-node-1] => 
(item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-30 00:58:42.051168 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-30 00:58:42.051172 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-30 00:58:42.051176 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-30 00:58:42.051179 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-30 00:58:42.051183 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-30 00:58:42.051187 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-30 00:58:42.051191 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-30 00:58:42.051195 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-30 00:58:42.051199 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-30 00:58:42.051203 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-30 00:58:42.051206 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-30 00:58:42.051210 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-30 00:58:42.051214 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-30 00:58:42.051218 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-30 00:58:42.051222 | orchestrator | 2026-03-30 
00:58:42.051226 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-30 00:58:42.051230 | orchestrator | Monday 30 March 2026 00:56:43 +0000 (0:00:09.056) 0:00:36.436 ********** 2026-03-30 00:58:42.051234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-30 00:58:42.051237 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-30 00:58:42.051246 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-30 00:58:42.051250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-30 00:58:42.051254 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-30 00:58:42.051257 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-30 00:58:42.051261 | orchestrator | 2026-03-30 00:58:42.051265 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-30 00:58:42.051269 | orchestrator | Monday 30 March 2026 00:56:45 +0000 (0:00:02.366) 0:00:38.803 ********** 2026-03-30 00:58:42.051277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.051284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.051289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-30 00:58:42.051295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.051308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.051314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-30 00:58:42.051325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.051336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.051380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-30 00:58:42.051387 | orchestrator | 2026-03-30 00:58:42.051391 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-30 00:58:42.051396 | orchestrator | Monday 30 March 2026 00:56:48 +0000 (0:00:02.524) 0:00:41.327 ********** 2026-03-30 00:58:42.051400 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051404 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.051407 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.051411 | orchestrator | 2026-03-30 00:58:42.051420 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-30 00:58:42.051425 | orchestrator | Monday 30 March 2026 00:56:48 +0000 (0:00:00.467) 0:00:41.795 ********** 2026-03-30 00:58:42.051429 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051433 | orchestrator | 2026-03-30 00:58:42.051436 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-30 00:58:42.051440 | orchestrator | Monday 30 March 2026 00:56:51 +0000 (0:00:02.316) 0:00:44.111 ********** 2026-03-30 00:58:42.051445 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051448 | orchestrator | 2026-03-30 00:58:42.051452 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-30 00:58:42.051457 | orchestrator | Monday 30 March 2026 00:56:53 +0000 (0:00:02.132) 0:00:46.244 ********** 2026-03-30 00:58:42.051460 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.051464 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.051468 | orchestrator | ok: 
[testbed-node-1] 2026-03-30 00:58:42.051472 | orchestrator | 2026-03-30 00:58:42.051476 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-30 00:58:42.051480 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:00.856) 0:00:47.101 ********** 2026-03-30 00:58:42.051484 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.051491 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:58:42.051497 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.051504 | orchestrator | 2026-03-30 00:58:42.051511 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-30 00:58:42.051517 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:00.289) 0:00:47.390 ********** 2026-03-30 00:58:42.051524 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051530 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.051536 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.051542 | orchestrator | 2026-03-30 00:58:42.051548 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-30 00:58:42.051556 | orchestrator | Monday 30 March 2026 00:56:54 +0000 (0:00:00.384) 0:00:47.775 ********** 2026-03-30 00:58:42.051560 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051566 | orchestrator | 2026-03-30 00:58:42.051572 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-30 00:58:42.051579 | orchestrator | Monday 30 March 2026 00:57:08 +0000 (0:00:14.057) 0:01:01.832 ********** 2026-03-30 00:58:42.051584 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051588 | orchestrator | 2026-03-30 00:58:42.051592 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-30 00:58:42.051598 | orchestrator | Monday 30 March 2026 00:57:20 +0000 (0:00:11.206) 0:01:13.039 ********** 
2026-03-30 00:58:42.051604 | orchestrator | 2026-03-30 00:58:42.051614 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-30 00:58:42.051623 | orchestrator | Monday 30 March 2026 00:57:20 +0000 (0:00:00.063) 0:01:13.102 ********** 2026-03-30 00:58:42.051629 | orchestrator | 2026-03-30 00:58:42.051635 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-30 00:58:42.051641 | orchestrator | Monday 30 March 2026 00:57:20 +0000 (0:00:00.066) 0:01:13.169 ********** 2026-03-30 00:58:42.051648 | orchestrator | 2026-03-30 00:58:42.051660 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-30 00:58:42.051666 | orchestrator | Monday 30 March 2026 00:57:20 +0000 (0:00:00.097) 0:01:13.267 ********** 2026-03-30 00:58:42.051673 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051703 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:58:42.051711 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:58:42.051718 | orchestrator | 2026-03-30 00:58:42.051725 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-30 00:58:42.051732 | orchestrator | Monday 30 March 2026 00:57:33 +0000 (0:00:12.686) 0:01:25.953 ********** 2026-03-30 00:58:42.051738 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051744 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:58:42.051755 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:58:42.051760 | orchestrator | 2026-03-30 00:58:42.051764 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-30 00:58:42.051768 | orchestrator | Monday 30 March 2026 00:57:37 +0000 (0:00:04.778) 0:01:30.732 ********** 2026-03-30 00:58:42.051772 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051780 | orchestrator | changed: [testbed-node-2] 
2026-03-30 00:58:42.051784 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:58:42.051788 | orchestrator | 2026-03-30 00:58:42.051792 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-30 00:58:42.051796 | orchestrator | Monday 30 March 2026 00:57:49 +0000 (0:00:11.216) 0:01:41.949 ********** 2026-03-30 00:58:42.051800 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 00:58:42.051804 | orchestrator | 2026-03-30 00:58:42.051808 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-30 00:58:42.051812 | orchestrator | Monday 30 March 2026 00:57:49 +0000 (0:00:00.517) 0:01:42.466 ********** 2026-03-30 00:58:42.051816 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.051820 | orchestrator | ok: [testbed-node-2] 2026-03-30 00:58:42.051824 | orchestrator | ok: [testbed-node-1] 2026-03-30 00:58:42.051828 | orchestrator | 2026-03-30 00:58:42.051832 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-30 00:58:42.051835 | orchestrator | Monday 30 March 2026 00:57:50 +0000 (0:00:00.823) 0:01:43.290 ********** 2026-03-30 00:58:42.051839 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:58:42.051843 | orchestrator | 2026-03-30 00:58:42.051847 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-30 00:58:42.051850 | orchestrator | Monday 30 March 2026 00:57:52 +0000 (0:00:01.669) 0:01:44.960 ********** 2026-03-30 00:58:42.051855 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-30 00:58:42.051859 | orchestrator | 2026-03-30 00:58:42.051863 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-30 00:58:42.051867 | orchestrator | Monday 30 March 2026 00:58:04 +0000 (0:00:12.839) 
0:01:57.799 ********** 2026-03-30 00:58:42.051871 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-30 00:58:42.051874 | orchestrator | 2026-03-30 00:58:42.051879 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-30 00:58:42.051883 | orchestrator | Monday 30 March 2026 00:58:24 +0000 (0:00:19.786) 0:02:17.585 ********** 2026-03-30 00:58:42.051887 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-30 00:58:42.051891 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-30 00:58:42.051894 | orchestrator | 2026-03-30 00:58:42.051898 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-30 00:58:42.051902 | orchestrator | Monday 30 March 2026 00:58:33 +0000 (0:00:08.396) 0:02:25.981 ********** 2026-03-30 00:58:42.051906 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051910 | orchestrator | 2026-03-30 00:58:42.051914 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-30 00:58:42.051918 | orchestrator | Monday 30 March 2026 00:58:33 +0000 (0:00:00.120) 0:02:26.102 ********** 2026-03-30 00:58:42.051922 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051926 | orchestrator | 2026-03-30 00:58:42.051930 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-30 00:58:42.051934 | orchestrator | Monday 30 March 2026 00:58:33 +0000 (0:00:00.130) 0:02:26.232 ********** 2026-03-30 00:58:42.051938 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051942 | orchestrator | 2026-03-30 00:58:42.051946 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-30 00:58:42.051949 | orchestrator | Monday 30 March 2026 00:58:33 +0000 
(0:00:00.121) 0:02:26.353 ********** 2026-03-30 00:58:42.051961 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.051965 | orchestrator | 2026-03-30 00:58:42.051969 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-30 00:58:42.051975 | orchestrator | Monday 30 March 2026 00:58:33 +0000 (0:00:00.309) 0:02:26.663 ********** 2026-03-30 00:58:42.051981 | orchestrator | ok: [testbed-node-0] 2026-03-30 00:58:42.051987 | orchestrator | 2026-03-30 00:58:42.051992 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-30 00:58:42.051998 | orchestrator | Monday 30 March 2026 00:58:37 +0000 (0:00:03.896) 0:02:30.560 ********** 2026-03-30 00:58:42.052003 | orchestrator | skipping: [testbed-node-0] 2026-03-30 00:58:42.052009 | orchestrator | skipping: [testbed-node-1] 2026-03-30 00:58:42.052015 | orchestrator | skipping: [testbed-node-2] 2026-03-30 00:58:42.052022 | orchestrator | 2026-03-30 00:58:42.052029 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:58:42.052034 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-30 00:58:42.052062 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-30 00:58:42.052073 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-30 00:58:42.052078 | orchestrator | 2026-03-30 00:58:42.052082 | orchestrator | 2026-03-30 00:58:42.052086 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:58:42.052090 | orchestrator | Monday 30 March 2026 00:58:38 +0000 (0:00:01.066) 0:02:31.626 ********** 2026-03-30 00:58:42.052093 | orchestrator | =============================================================================== 2026-03-30 
00:58:42.052097 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.79s 2026-03-30 00:58:42.052101 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.06s 2026-03-30 00:58:42.052105 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.84s 2026-03-30 00:58:42.052109 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 12.69s 2026-03-30 00:58:42.052116 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.22s 2026-03-30 00:58:42.052120 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.21s 2026-03-30 00:58:42.052125 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.06s 2026-03-30 00:58:42.052129 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.40s 2026-03-30 00:58:42.052133 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.95s 2026-03-30 00:58:42.052136 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.78s 2026-03-30 00:58:42.052140 | orchestrator | keystone : Creating default user role ----------------------------------- 3.90s 2026-03-30 00:58:42.052144 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.32s 2026-03-30 00:58:42.052148 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.21s 2026-03-30 00:58:42.052152 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.52s 2026-03-30 00:58:42.052156 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.37s 2026-03-30 00:58:42.052160 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.32s 2026-03-30 00:58:42.052164 
| orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.13s 2026-03-30 00:58:42.052168 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.06s 2026-03-30 00:58:42.052172 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.76s 2026-03-30 00:58:42.052176 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.67s 2026-03-30 00:58:42.052184 | orchestrator | 2026-03-30 00:58:42 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:45.041777 | orchestrator | 2026-03-30 00:58:45 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:58:45.042191 | orchestrator | 2026-03-30 00:58:45 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:45.042851 | orchestrator | 2026-03-30 00:58:45 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:45.043924 | orchestrator | 2026-03-30 00:58:45 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:45.044427 | orchestrator | 2026-03-30 00:58:45 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:45.044461 | orchestrator | 2026-03-30 00:58:45 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:48.076724 | orchestrator | 2026-03-30 00:58:48 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:58:48.078905 | orchestrator | 2026-03-30 00:58:48 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:48.080901 | orchestrator | 2026-03-30 00:58:48 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:48.082629 | orchestrator | 2026-03-30 00:58:48 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:48.083996 | orchestrator | 2026-03-30 00:58:48 | INFO  | Task 
4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:48.084370 | orchestrator | 2026-03-30 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:58:57.234297 | orchestrator | 2026-03-30 00:58:57 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:58:57.236900 | orchestrator | 2026-03-30 00:58:57 | INFO  | Task 
af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state STARTED 2026-03-30 00:58:57.237859 | orchestrator | 2026-03-30 00:58:57 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:58:57.238497 | orchestrator | 2026-03-30 00:58:57 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:58:57.239242 | orchestrator | 2026-03-30 00:58:57 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:58:57.239257 | orchestrator | 2026-03-30 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:00.285203 | orchestrator | 2026-03-30 00:59:00 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:00.285275 | orchestrator | 2026-03-30 00:59:00 | INFO  | Task af2b841b-c45c-4c64-9a17-0f3080be0e8d is in state SUCCESS 2026-03-30 00:59:00.285705 | orchestrator | 2026-03-30 00:59:00 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:00.286677 | orchestrator | 2026-03-30 00:59:00 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:00.288669 | orchestrator | 2026-03-30 00:59:00 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state STARTED 2026-03-30 00:59:00.288721 | orchestrator | 2026-03-30 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:03.325549 | orchestrator | 2026-03-30 00:59:03 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:03.325763 | orchestrator | 2026-03-30 00:59:03 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:03.326452 | orchestrator | 2026-03-30 00:59:03 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:03.326860 | orchestrator | 2026-03-30 00:59:03 | INFO  | Task 4395292c-8075-42e7-9ac8-cb709be881ee is in state SUCCESS 2026-03-30 00:59:03.327376 | orchestrator | 2026-03-30 00:59:03.327402 | orchestrator 
| [WARNING]: Collection community.general does not support Ansible version 2026-03-30 00:59:03.327409 | orchestrator | 2.16.14 2026-03-30 00:59:03.327417 | orchestrator | 2026-03-30 00:59:03.327423 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-30 00:59:03.327431 | orchestrator | 2026-03-30 00:59:03.327437 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-30 00:59:03.327445 | orchestrator | Monday 30 March 2026 00:58:12 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-30 00:59:03.327452 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327460 | orchestrator | 2026-03-30 00:59:03.327467 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-30 00:59:03.327473 | orchestrator | Monday 30 March 2026 00:58:14 +0000 (0:00:01.491) 0:00:01.663 ********** 2026-03-30 00:59:03.327480 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327486 | orchestrator | 2026-03-30 00:59:03.327492 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-30 00:59:03.327499 | orchestrator | Monday 30 March 2026 00:58:15 +0000 (0:00:00.819) 0:00:02.483 ********** 2026-03-30 00:59:03.327505 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327551 | orchestrator | 2026-03-30 00:59:03.327559 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-30 00:59:03.327565 | orchestrator | Monday 30 March 2026 00:58:15 +0000 (0:00:00.952) 0:00:03.435 ********** 2026-03-30 00:59:03.327571 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327578 | orchestrator | 2026-03-30 00:59:03.327584 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-30 00:59:03.327590 | orchestrator | Monday 30 March 2026 00:58:17 +0000 (0:00:01.396) 0:00:04.832 
********** 2026-03-30 00:59:03.327596 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327603 | orchestrator | 2026-03-30 00:59:03.327609 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-30 00:59:03.327658 | orchestrator | Monday 30 March 2026 00:58:18 +0000 (0:00:00.893) 0:00:05.725 ********** 2026-03-30 00:59:03.327665 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327695 | orchestrator | 2026-03-30 00:59:03.327702 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-30 00:59:03.327708 | orchestrator | Monday 30 March 2026 00:58:19 +0000 (0:00:00.789) 0:00:06.515 ********** 2026-03-30 00:59:03.327715 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327721 | orchestrator | 2026-03-30 00:59:03.327727 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-30 00:59:03.327733 | orchestrator | Monday 30 March 2026 00:58:20 +0000 (0:00:01.123) 0:00:07.639 ********** 2026-03-30 00:59:03.327739 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327746 | orchestrator | 2026-03-30 00:59:03.327752 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-30 00:59:03.327758 | orchestrator | Monday 30 March 2026 00:58:21 +0000 (0:00:01.009) 0:00:08.648 ********** 2026-03-30 00:59:03.327764 | orchestrator | changed: [testbed-manager] 2026-03-30 00:59:03.327771 | orchestrator | 2026-03-30 00:59:03.327777 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-30 00:59:03.327783 | orchestrator | Monday 30 March 2026 00:58:34 +0000 (0:00:12.853) 0:00:21.501 ********** 2026-03-30 00:59:03.327790 | orchestrator | skipping: [testbed-manager] 2026-03-30 00:59:03.327796 | orchestrator | 2026-03-30 00:59:03.327815 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2026-03-30 00:59:03.327821 | orchestrator | 2026-03-30 00:59:03.327827 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-30 00:59:03.327834 | orchestrator | Monday 30 March 2026 00:58:34 +0000 (0:00:00.151) 0:00:21.653 ********** 2026-03-30 00:59:03.327840 | orchestrator | changed: [testbed-node-0] 2026-03-30 00:59:03.327846 | orchestrator | 2026-03-30 00:59:03.327852 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-30 00:59:03.327859 | orchestrator | 2026-03-30 00:59:03.327865 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-30 00:59:03.327871 | orchestrator | Monday 30 March 2026 00:58:46 +0000 (0:00:12.088) 0:00:33.742 ********** 2026-03-30 00:59:03.327878 | orchestrator | changed: [testbed-node-1] 2026-03-30 00:59:03.327884 | orchestrator | 2026-03-30 00:59:03.327890 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-30 00:59:03.327896 | orchestrator | 2026-03-30 00:59:03.327902 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-30 00:59:03.327909 | orchestrator | Monday 30 March 2026 00:58:57 +0000 (0:00:11.586) 0:00:45.328 ********** 2026-03-30 00:59:03.327915 | orchestrator | changed: [testbed-node-2] 2026-03-30 00:59:03.327921 | orchestrator | 2026-03-30 00:59:03.327927 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 00:59:03.327934 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-30 00:59:03.327942 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:59:03.327949 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-03-30 00:59:03.327954 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 00:59:03.327960 | orchestrator | 2026-03-30 00:59:03.327966 | orchestrator | 2026-03-30 00:59:03.327971 | orchestrator | 2026-03-30 00:59:03.327977 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 00:59:03.327983 | orchestrator | Monday 30 March 2026 00:58:59 +0000 (0:00:01.419) 0:00:46.748 ********** 2026-03-30 00:59:03.327990 | orchestrator | =============================================================================== 2026-03-30 00:59:03.328074 | orchestrator | Restart ceph manager service ------------------------------------------- 25.09s 2026-03-30 00:59:03.328090 | orchestrator | Create admin user ------------------------------------------------------ 12.85s 2026-03-30 00:59:03.328101 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.49s 2026-03-30 00:59:03.328105 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.40s 2026-03-30 00:59:03.328109 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.12s 2026-03-30 00:59:03.328113 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.01s 2026-03-30 00:59:03.328117 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.95s 2026-03-30 00:59:03.328121 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.89s 2026-03-30 00:59:03.328124 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.82s 2026-03-30 00:59:03.328128 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.79s 2026-03-30 00:59:03.328132 | orchestrator | Remove temporary file for ceph_dashboard_password 
----------------------- 0.15s
2026-03-30 00:59:03.328136 | orchestrator |
2026-03-30 00:59:03.328139 | orchestrator |
2026-03-30 00:59:03.328143 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 00:59:03.328147 | orchestrator |
2026-03-30 00:59:03.328151 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 00:59:03.328155 | orchestrator | Monday 30 March 2026 00:58:18 +0000 (0:00:00.291) 0:00:00.291 **********
2026-03-30 00:59:03.328158 | orchestrator | ok: [testbed-node-0]
2026-03-30 00:59:03.328163 | orchestrator | ok: [testbed-node-1]
2026-03-30 00:59:03.328167 | orchestrator | ok: [testbed-node-2]
2026-03-30 00:59:03.328173 | orchestrator | ok: [testbed-node-3]
2026-03-30 00:59:03.328179 | orchestrator | ok: [testbed-node-4]
2026-03-30 00:59:03.328185 | orchestrator | ok: [testbed-node-5]
2026-03-30 00:59:03.328191 | orchestrator | ok: [testbed-manager]
2026-03-30 00:59:03.328196 | orchestrator |
2026-03-30 00:59:03.328202 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 00:59:03.328209 | orchestrator | Monday 30 March 2026 00:58:18 +0000 (0:00:00.673) 0:00:00.964 **********
2026-03-30 00:59:03.328215 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328222 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328228 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328234 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328241 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328246 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328252 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-30 00:59:03.328260 | orchestrator |
2026-03-30 00:59:03.328264 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-30 00:59:03.328267 | orchestrator |
2026-03-30 00:59:03.328271 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-30 00:59:03.328275 | orchestrator | Monday 30 March 2026 00:58:19 +0000 (0:00:00.736) 0:00:01.701 **********
2026-03-30 00:59:03.328284 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-30 00:59:03.328289 | orchestrator |
2026-03-30 00:59:03.328293 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-30 00:59:03.328297 | orchestrator | Monday 30 March 2026 00:58:21 +0000 (0:00:01.825) 0:00:03.526 **********
2026-03-30 00:59:03.328302 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-03-30 00:59:03.328306 | orchestrator |
2026-03-30 00:59:03.328311 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-30 00:59:03.328315 | orchestrator | Monday 30 March 2026 00:58:30 +0000 (0:00:09.186) 0:00:12.712 **********
2026-03-30 00:59:03.328320 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-30 00:59:03.328334 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-30 00:59:03.328338 | orchestrator |
2026-03-30 00:59:03.328343 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-30 00:59:03.328347 | orchestrator | Monday 30 March 2026 00:58:39 +0000 (0:00:08.698) 0:00:21.411 **********
2026-03-30 00:59:03.328352 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-30 00:59:03.328356 | orchestrator |
2026-03-30 00:59:03.328361 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-30 00:59:03.328365 | orchestrator | Monday 30 March 2026 00:58:42 +0000 (0:00:03.776) 0:00:25.187 **********
2026-03-30 00:59:03.328369 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-03-30 00:59:03.328374 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-30 00:59:03.328378 | orchestrator |
2026-03-30 00:59:03.328382 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-30 00:59:03.328387 | orchestrator | Monday 30 March 2026 00:58:47 +0000 (0:00:04.705) 0:00:29.893 **********
2026-03-30 00:59:03.328391 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-30 00:59:03.328396 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-03-30 00:59:03.328400 | orchestrator |
2026-03-30 00:59:03.328405 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-30 00:59:03.328409 | orchestrator | Monday 30 March 2026 00:58:54 +0000 (0:00:07.113) 0:00:37.006 **********
2026-03-30 00:59:03.328413 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-03-30 00:59:03.328418 | orchestrator |
2026-03-30 00:59:03.328422 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 00:59:03.328430 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328435 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328440 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328444 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328448 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328453 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328457 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 00:59:03.328462 | orchestrator |
2026-03-30 00:59:03.328466 | orchestrator |
2026-03-30 00:59:03.328471 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 00:59:03.328476 | orchestrator | Monday 30 March 2026 00:59:01 +0000 (0:00:06.564) 0:00:43.570 **********
2026-03-30 00:59:03.328482 | orchestrator | ===============================================================================
2026-03-30 00:59:03.328489 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 9.19s
2026-03-30 00:59:03.328495 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.70s
2026-03-30 00:59:03.328501 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.11s
2026-03-30 00:59:03.328507 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.56s
2026-03-30 00:59:03.328513 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.71s
2026-03-30 00:59:03.328519 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.78s
2026-03-30 00:59:03.328530 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.83s
2026-03-30 00:59:03.328536 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-03-30 00:59:03.328542 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s
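The Swift endpoints registered above embed Keystone's %-style URL template: at request time Keystone substitutes the caller's project id for `%(project_id)s`. A minimal sketch of that substitution, assuming a hypothetical `render_endpoint` helper and a made-up project id (this is not the actual service-ks-register code):

```python
# Endpoint URL as registered in the log above; %(project_id)s is a
# Python %-mapping placeholder that Keystone fills per project.
endpoint_template = "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"

def render_endpoint(template: str, project_id: str) -> str:
    # Hypothetical helper: apply the %-mapping substitution Keystone performs.
    return template % {"project_id": project_id}

print(render_endpoint(endpoint_template, "0123456789abcdef"))  # hypothetical id
# → https://api.testbed.osism.xyz:6780/swift/v1/AUTH_0123456789abcdef
```

The `AUTH_<project_id>` suffix is what scopes each project to its own set of Swift containers behind the Ceph RGW.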
2026-03-30 00:59:03.328549 | orchestrator | 2026-03-30 00:59:03 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:03.328555 | orchestrator | 2026-03-30 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:06.350723 | orchestrator | 2026-03-30 00:59:06 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:06.350822 | orchestrator | 2026-03-30 00:59:06 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:06.352034 | orchestrator | 2026-03-30 00:59:06 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:06.352431 | orchestrator | 2026-03-30 00:59:06 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:06.352590 | orchestrator | 2026-03-30 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:09.386115 | orchestrator | 2026-03-30 00:59:09 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:09.386216 | orchestrator | 2026-03-30 00:59:09 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:09.386726 | orchestrator | 2026-03-30 00:59:09 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:09.387547 | orchestrator | 2026-03-30 00:59:09 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:09.387579 | orchestrator | 2026-03-30 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:12.420604 | orchestrator | 2026-03-30 00:59:12 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:12.422860 | orchestrator | 2026-03-30 00:59:12 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:12.423255 | orchestrator | 2026-03-30 00:59:12 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:12.424185 | 
orchestrator | 2026-03-30 00:59:12 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:12.424506 | orchestrator | 2026-03-30 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:15.459302 | orchestrator | 2026-03-30 00:59:15 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:15.459788 | orchestrator | 2026-03-30 00:59:15 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:15.460501 | orchestrator | 2026-03-30 00:59:15 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:15.461208 | orchestrator | 2026-03-30 00:59:15 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:15.461289 | orchestrator | 2026-03-30 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:18.489006 | orchestrator | 2026-03-30 00:59:18 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:18.489790 | orchestrator | 2026-03-30 00:59:18 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:18.490586 | orchestrator | 2026-03-30 00:59:18 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:18.491313 | orchestrator | 2026-03-30 00:59:18 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:18.491391 | orchestrator | 2026-03-30 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:21.528316 | orchestrator | 2026-03-30 00:59:21 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:21.530194 | orchestrator | 2026-03-30 00:59:21 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:21.532383 | orchestrator | 2026-03-30 00:59:21 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:21.532885 | orchestrator | 2026-03-30 
00:59:21 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:21.533017 | orchestrator | 2026-03-30 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:24.571491 | orchestrator | 2026-03-30 00:59:24 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:24.573130 | orchestrator | 2026-03-30 00:59:24 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:24.573873 | orchestrator | 2026-03-30 00:59:24 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:24.575018 | orchestrator | 2026-03-30 00:59:24 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:24.575077 | orchestrator | 2026-03-30 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:27.603064 | orchestrator | 2026-03-30 00:59:27 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:27.603727 | orchestrator | 2026-03-30 00:59:27 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:27.604412 | orchestrator | 2026-03-30 00:59:27 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:27.605223 | orchestrator | 2026-03-30 00:59:27 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:27.605257 | orchestrator | 2026-03-30 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:30.632391 | orchestrator | 2026-03-30 00:59:30 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:30.632584 | orchestrator | 2026-03-30 00:59:30 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:30.633690 | orchestrator | 2026-03-30 00:59:30 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:30.634177 | orchestrator | 2026-03-30 00:59:30 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:30.634307 | orchestrator | 2026-03-30 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:33.664764 | orchestrator | 2026-03-30 00:59:33 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:33.665107 | orchestrator | 2026-03-30 00:59:33 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:33.665782 | orchestrator | 2026-03-30 00:59:33 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:33.666674 | orchestrator | 2026-03-30 00:59:33 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:33.666706 | orchestrator | 2026-03-30 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:36.687383 | orchestrator | 2026-03-30 00:59:36 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:36.687749 | orchestrator | 2026-03-30 00:59:36 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:36.688368 | orchestrator | 2026-03-30 00:59:36 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:36.690375 | orchestrator | 2026-03-30 00:59:36 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:36.690419 | orchestrator | 2026-03-30 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:39.722460 | orchestrator | 2026-03-30 00:59:39 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:39.724062 | orchestrator | 2026-03-30 00:59:39 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:39.725153 | orchestrator | 2026-03-30 00:59:39 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:39.726754 | orchestrator | 2026-03-30 00:59:39 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:39.726809 | orchestrator | 2026-03-30 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:42.752695 | orchestrator | 2026-03-30 00:59:42 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:42.753786 | orchestrator | 2026-03-30 00:59:42 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:42.755827 | orchestrator | 2026-03-30 00:59:42 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:42.757429 | orchestrator | 2026-03-30 00:59:42 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:42.757663 | orchestrator | 2026-03-30 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:45.780776 | orchestrator | 2026-03-30 00:59:45 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:45.781261 | orchestrator | 2026-03-30 00:59:45 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:45.782255 | orchestrator | 2026-03-30 00:59:45 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:45.782676 | orchestrator | 2026-03-30 00:59:45 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:45.782700 | orchestrator | 2026-03-30 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:48.808936 | orchestrator | 2026-03-30 00:59:48 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:48.809558 | orchestrator | 2026-03-30 00:59:48 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:48.809684 | orchestrator | 2026-03-30 00:59:48 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:48.810377 | orchestrator | 2026-03-30 00:59:48 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:48.810394 | orchestrator | 2026-03-30 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:51.835924 | orchestrator | 2026-03-30 00:59:51 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:51.836813 | orchestrator | 2026-03-30 00:59:51 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:51.837830 | orchestrator | 2026-03-30 00:59:51 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:51.839640 | orchestrator | 2026-03-30 00:59:51 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:51.839704 | orchestrator | 2026-03-30 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:54.876734 | orchestrator | 2026-03-30 00:59:54 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:54.877352 | orchestrator | 2026-03-30 00:59:54 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:54.878233 | orchestrator | 2026-03-30 00:59:54 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:54.879123 | orchestrator | 2026-03-30 00:59:54 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:54.879172 | orchestrator | 2026-03-30 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-03-30 00:59:57.931179 | orchestrator | 2026-03-30 00:59:57 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 00:59:57.931257 | orchestrator | 2026-03-30 00:59:57 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 00:59:57.932134 | orchestrator | 2026-03-30 00:59:57 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 00:59:57.932862 | orchestrator | 2026-03-30 00:59:57 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 00:59:57.932936 | orchestrator | 2026-03-30 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:00.974852 | orchestrator | 2026-03-30 01:00:00 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:00.975206 | orchestrator | 2026-03-30 01:00:00 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:00.976268 | orchestrator | 2026-03-30 01:00:00 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:00.978646 | orchestrator | 2026-03-30 01:00:00 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:00.978703 | orchestrator | 2026-03-30 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:04.006840 | orchestrator | 2026-03-30 01:00:04 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:04.006982 | orchestrator | 2026-03-30 01:00:04 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:04.007681 | orchestrator | 2026-03-30 01:00:04 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:04.008412 | orchestrator | 2026-03-30 01:00:04 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:04.008459 | orchestrator | 2026-03-30 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:07.041081 | orchestrator | 2026-03-30 01:00:07 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:07.041131 | orchestrator | 2026-03-30 01:00:07 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:07.042198 | orchestrator | 2026-03-30 01:00:07 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:07.043831 | orchestrator | 2026-03-30 01:00:07 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:07.043907 | orchestrator | 2026-03-30 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:10.079016 | orchestrator | 2026-03-30 01:00:10 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:10.079132 | orchestrator | 2026-03-30 01:00:10 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:10.079702 | orchestrator | 2026-03-30 01:00:10 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:10.080258 | orchestrator | 2026-03-30 01:00:10 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:10.080319 | orchestrator | 2026-03-30 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:13.120187 | orchestrator | 2026-03-30 01:00:13 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:13.122340 | orchestrator | 2026-03-30 01:00:13 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:13.124541 | orchestrator | 2026-03-30 01:00:13 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:13.126834 | orchestrator | 2026-03-30 01:00:13 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:13.126894 | orchestrator | 2026-03-30 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:16.166876 | orchestrator | 2026-03-30 01:00:16 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:16.167515 | orchestrator | 2026-03-30 01:00:16 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:16.168373 | orchestrator | 2026-03-30 01:00:16 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:16.169508 | orchestrator | 2026-03-30 01:00:16 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:16.169844 | orchestrator | 2026-03-30 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:19.216487 | orchestrator | 2026-03-30 01:00:19 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:19.218667 | orchestrator | 2026-03-30 01:00:19 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:19.220732 | orchestrator | 2026-03-30 01:00:19 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:19.222699 | orchestrator | 2026-03-30 01:00:19 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:19.222747 | orchestrator | 2026-03-30 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:22.278338 | orchestrator | 2026-03-30 01:00:22 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:22.279091 | orchestrator | 2026-03-30 01:00:22 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:22.280382 | orchestrator | 2026-03-30 01:00:22 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:22.281765 | orchestrator | 2026-03-30 01:00:22 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:22.281796 | orchestrator | 2026-03-30 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:25.325869 | orchestrator | 2026-03-30 01:00:25 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:25.327735 | orchestrator | 2026-03-30 01:00:25 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:25.329712 | orchestrator | 2026-03-30 01:00:25 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:25.331514 | orchestrator | 2026-03-30 01:00:25 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:25.331617 | orchestrator | 2026-03-30 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:28.377300 | orchestrator | 2026-03-30 01:00:28 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:28.378219 | orchestrator | 2026-03-30 01:00:28 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:28.381309 | orchestrator | 2026-03-30 01:00:28 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:28.381398 | orchestrator | 2026-03-30 01:00:28 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:28.381410 | orchestrator | 2026-03-30 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:31.415936 | orchestrator | 2026-03-30 01:00:31 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:31.417815 | orchestrator | 2026-03-30 01:00:31 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:31.419999 | orchestrator | 2026-03-30 01:00:31 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:31.422046 | orchestrator | 2026-03-30 01:00:31 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:31.422075 | orchestrator | 2026-03-30 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:34.465349 | orchestrator | 2026-03-30 01:00:34 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:34.467147 | orchestrator | 2026-03-30 01:00:34 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:34.468445 | orchestrator | 2026-03-30 01:00:34 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:34.469981 | orchestrator | 2026-03-30 01:00:34 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:34.470115 | orchestrator | 2026-03-30 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:37.516971 | orchestrator | 2026-03-30 01:00:37 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:37.517039 | orchestrator | 2026-03-30 01:00:37 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:37.517510 | orchestrator | 2026-03-30 01:00:37 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:37.519496 | orchestrator | 2026-03-30 01:00:37 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:37.519547 | orchestrator | 2026-03-30 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:40.558644 | orchestrator | 2026-03-30 01:00:40 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:40.560058 | orchestrator | 2026-03-30 01:00:40 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:40.561834 | orchestrator | 2026-03-30 01:00:40 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:40.563561 | orchestrator | 2026-03-30 01:00:40 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:40.563603 | orchestrator | 2026-03-30 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:43.607381 | orchestrator | 2026-03-30 01:00:43 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:43.607425 | orchestrator | 2026-03-30 01:00:43 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:43.607430 | orchestrator | 2026-03-30 01:00:43 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:43.607435 | orchestrator | 2026-03-30 01:00:43 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:43.607439 | orchestrator | 2026-03-30 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:46.658128 | orchestrator | 2026-03-30 01:00:46 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:46.683165 | orchestrator | 2026-03-30 01:00:46 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:46.683201 | orchestrator | 2026-03-30 01:00:46 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:46.683207 | orchestrator | 2026-03-30 01:00:46 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:46.683212 | orchestrator | 2026-03-30 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:49.699471 | orchestrator | 2026-03-30 01:00:49 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:49.699888 | orchestrator | 2026-03-30 01:00:49 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:49.700534 | orchestrator | 2026-03-30 01:00:49 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:49.702621 | orchestrator | 2026-03-30 01:00:49 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:49.702662 | orchestrator | 2026-03-30 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:52.756409 | orchestrator | 2026-03-30 01:00:52 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:52.756625 | orchestrator | 2026-03-30 01:00:52 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:52.759431 | orchestrator | 2026-03-30 01:00:52 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:52.759839 | orchestrator | 2026-03-30 01:00:52 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:52.759861 | orchestrator | 2026-03-30 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:55.801341 | orchestrator | 2026-03-30 01:00:55 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:55.803334 | orchestrator | 2026-03-30 01:00:55 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:55.804275 | orchestrator | 2026-03-30 01:00:55 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:55.806240 | orchestrator | 2026-03-30 01:00:55 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:55.806680 | orchestrator | 2026-03-30 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:00:58.866326 | orchestrator | 2026-03-30 01:00:58 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:00:58.866415 | orchestrator | 2026-03-30 01:00:58 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:00:58.866424 | orchestrator | 2026-03-30 01:00:58 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:00:58.866432 | orchestrator | 2026-03-30 01:00:58 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:00:58.866439 | orchestrator | 2026-03-30 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:01.905718 | orchestrator | 2026-03-30 01:01:01 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:01.908717 | orchestrator | 2026-03-30 01:01:01 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:01:01.910774 | orchestrator | 2026-03-30 01:01:01 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state STARTED 2026-03-30 01:01:01.912228 | orchestrator | 2026-03-30 01:01:01 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:20.156113 | orchestrator | 2026-03-30 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:23.209131 | orchestrator | 2026-03-30 01:01:23 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:23.210760 | orchestrator | 2026-03-30 01:01:23 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:01:23.213011 | orchestrator | 2026-03-30 01:01:23 | INFO  | Task 4f09c503-262b-40aa-bacc-2c2ed9c03155 is in state SUCCESS 2026-03-30 01:01:23.214809 | orchestrator | 2026-03-30 01:01:23.214858 | orchestrator | 2026-03-30 01:01:23.214867 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:01:23.214875 | orchestrator | 2026-03-30 01:01:23.214883 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:01:23.214890 | orchestrator | Monday 30 March 2026 00:58:17 +0000 (0:00:00.307) 0:00:00.307 ********** 2026-03-30 01:01:23.214897 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:01:23.214905 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:01:23.214912 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:01:23.214919 | orchestrator | 2026-03-30 01:01:23.214926 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:01:23.214933 | orchestrator | Monday 30 March 2026 00:58:17 +0000 (0:00:00.272) 0:00:00.579 ********** 2026-03-30 01:01:23.214940 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-30 01:01:23.214947 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-30 01:01:23.214955 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-30 01:01:23.214962 | orchestrator | 2026-03-30 01:01:23.214968 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2026-03-30 01:01:23.214974 | orchestrator | 2026-03-30 01:01:23.214981 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-30 01:01:23.214987 | orchestrator | Monday 30 March 2026 00:58:18 +0000 (0:00:00.316) 0:00:00.896 ********** 2026-03-30 01:01:23.214994 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:01:23.215001 | orchestrator | 2026-03-30 01:01:23.215007 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-30 01:01:23.215014 | orchestrator | Monday 30 March 2026 00:58:18 +0000 (0:00:00.575) 0:00:01.472 ********** 2026-03-30 01:01:23.215021 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-30 01:01:23.215028 | orchestrator | 2026-03-30 01:01:23.215035 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-30 01:01:23.215041 | orchestrator | Monday 30 March 2026 00:58:32 +0000 (0:00:13.882) 0:00:15.355 ********** 2026-03-30 01:01:23.215047 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-30 01:01:23.215053 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-30 01:01:23.215060 | orchestrator | 2026-03-30 01:01:23.215066 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-30 01:01:23.215073 | orchestrator | Monday 30 March 2026 00:58:40 +0000 (0:00:07.726) 0:00:23.081 ********** 2026-03-30 01:01:23.215093 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-30 01:01:23.215101 | orchestrator | 2026-03-30 01:01:23.215108 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-30 01:01:23.215114 | 
orchestrator | Monday 30 March 2026 00:58:44 +0000 (0:00:03.991) 0:00:27.072 ********** 2026-03-30 01:01:23.215121 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-30 01:01:23.215128 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-30 01:01:23.215136 | orchestrator | 2026-03-30 01:01:23.215143 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-30 01:01:23.215168 | orchestrator | Monday 30 March 2026 00:58:48 +0000 (0:00:04.262) 0:00:31.335 ********** 2026-03-30 01:01:23.215175 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-30 01:01:23.215182 | orchestrator | 2026-03-30 01:01:23.215189 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-30 01:01:23.215204 | orchestrator | Monday 30 March 2026 00:58:52 +0000 (0:00:03.547) 0:00:34.883 ********** 2026-03-30 01:01:23.215211 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-30 01:01:23.215217 | orchestrator | 2026-03-30 01:01:23.215224 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-30 01:01:23.215231 | orchestrator | Monday 30 March 2026 00:58:57 +0000 (0:00:04.903) 0:00:39.786 ********** 2026-03-30 01:01:23.215255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215291 | orchestrator | 2026-03-30 01:01:23.215297 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-30 01:01:23.215304 | orchestrator | Monday 30 March 2026 00:59:01 +0000 (0:00:04.362) 0:00:44.149 ********** 2026-03-30 01:01:23.215311 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:01:23.215317 | orchestrator | 2026-03-30 01:01:23.215324 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-30 01:01:23.215334 | orchestrator | Monday 30 March 2026 00:59:02 +0000 (0:00:00.671) 0:00:44.821 ********** 2026-03-30 01:01:23.215340 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.215347 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:23.215353 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:23.215360 | orchestrator | 2026-03-30 01:01:23.215366 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-30 01:01:23.215373 | orchestrator | Monday 30 March 2026 00:59:06 +0000 (0:00:04.153) 0:00:48.975 ********** 2026-03-30 01:01:23.215380 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:23.215387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 
2026-03-30 01:01:23.215394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:23.215401 | orchestrator | 2026-03-30 01:01:23.215408 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-30 01:01:23.215415 | orchestrator | Monday 30 March 2026 00:59:08 +0000 (0:00:01.721) 0:00:50.697 ********** 2026-03-30 01:01:23.215423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:23.215430 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:23.215437 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:23.215448 | orchestrator | 2026-03-30 01:01:23.215455 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-30 01:01:23.215461 | orchestrator | Monday 30 March 2026 00:59:09 +0000 (0:00:01.295) 0:00:51.992 ********** 2026-03-30 01:01:23.215469 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:01:23.215475 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:01:23.215482 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:01:23.215488 | orchestrator | 2026-03-30 01:01:23.215495 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-30 01:01:23.215502 | orchestrator | Monday 30 March 2026 00:59:10 +0000 (0:00:00.605) 0:00:52.597 ********** 2026-03-30 01:01:23.215509 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.215516 | orchestrator | 2026-03-30 01:01:23.215523 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-30 01:01:23.215530 | orchestrator | Monday 30 March 2026 00:59:10 +0000 (0:00:00.118) 0:00:52.715 ********** 
2026-03-30 01:01:23.215537 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.215544 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.215550 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.215558 | orchestrator | 2026-03-30 01:01:23.215564 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-30 01:01:23.215571 | orchestrator | Monday 30 March 2026 00:59:10 +0000 (0:00:00.242) 0:00:52.958 ********** 2026-03-30 01:01:23.215577 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:01:23.215585 | orchestrator | 2026-03-30 01:01:23.215592 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-30 01:01:23.215599 | orchestrator | Monday 30 March 2026 00:59:10 +0000 (0:00:00.542) 0:00:53.500 ********** 2026-03-30 01:01:23.215612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215652 | orchestrator | 2026-03-30 01:01:23.215659 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-30 01:01:23.215666 | orchestrator | Monday 30 March 2026 00:59:14 +0000 (0:00:03.352) 0:00:56.853 ********** 2026-03-30 01:01:23.215680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 01:01:23.215692 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.215703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 01:01:23.215710 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.215736 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 01:01:23.215747 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.215753 | orchestrator | 2026-03-30 01:01:23.215759 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-30 01:01:23.215765 | orchestrator | Monday 30 March 2026 00:59:16 +0000 (0:00:02.623) 
0:00:59.477 ********** 2026-03-30 01:01:23.215772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 01:01:23.215780 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.215790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 01:01:23.215798 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.215809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-30 01:01:23.215821 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.215828 | orchestrator | 2026-03-30 01:01:23.215835 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-30 01:01:23.215842 | orchestrator | Monday 30 March 2026 00:59:21 +0000 (0:00:04.566) 0:01:04.043 ********** 2026-03-30 01:01:23.215848 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.215855 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.215862 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.215869 | orchestrator | 2026-03-30 01:01:23.215876 | orchestrator | TASK [glance : Copying over 
config.json files for services] ******************** 2026-03-30 01:01:23.215883 | orchestrator | Monday 30 March 2026 00:59:26 +0000 (0:00:05.071) 0:01:09.115 ********** 2026-03-30 01:01:23.215897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.215932 | orchestrator | 2026-03-30 01:01:23.215939 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-30 01:01:23.215946 | orchestrator | Monday 30 March 2026 00:59:31 +0000 (0:00:04.616) 0:01:13.731 ********** 2026-03-30 01:01:23.215954 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:23.215961 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.215969 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:23.215976 | orchestrator | 2026-03-30 01:01:23.215983 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-30 
01:01:23.215989 | orchestrator | Monday 30 March 2026 00:59:38 +0000 (0:00:07.012) 0:01:20.744 ********** 2026-03-30 01:01:23.216110 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216119 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216125 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216131 | orchestrator | 2026-03-30 01:01:23.216138 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-30 01:01:23.216144 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:03.239) 0:01:23.984 ********** 2026-03-30 01:01:23.216151 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216158 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216165 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216172 | orchestrator | 2026-03-30 01:01:23.216179 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-30 01:01:23.216187 | orchestrator | Monday 30 March 2026 00:59:44 +0000 (0:00:03.596) 0:01:27.581 ********** 2026-03-30 01:01:23.216194 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216201 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216213 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216220 | orchestrator | 2026-03-30 01:01:23.216227 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-30 01:01:23.216233 | orchestrator | Monday 30 March 2026 00:59:48 +0000 (0:00:03.189) 0:01:30.771 ********** 2026-03-30 01:01:23.216239 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216245 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216251 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216258 | orchestrator | 2026-03-30 01:01:23.216264 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-30 
01:01:23.216271 | orchestrator | Monday 30 March 2026 00:59:52 +0000 (0:00:04.047) 0:01:34.818 ********** 2026-03-30 01:01:23.216278 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216285 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216292 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216300 | orchestrator | 2026-03-30 01:01:23.216306 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-30 01:01:23.216312 | orchestrator | Monday 30 March 2026 00:59:52 +0000 (0:00:00.589) 0:01:35.408 ********** 2026-03-30 01:01:23.216319 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-30 01:01:23.216326 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216333 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-30 01:01:23.216340 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216346 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-30 01:01:23.216353 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216360 | orchestrator | 2026-03-30 01:01:23.216366 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-30 01:01:23.216372 | orchestrator | Monday 30 March 2026 00:59:57 +0000 (0:00:04.202) 0:01:39.610 ********** 2026-03-30 01:01:23.216379 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216386 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216392 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216398 | orchestrator | 2026-03-30 01:01:23.216405 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-03-30 01:01:23.216411 | orchestrator | Monday 30 March 2026 01:00:00 +0000 (0:00:03.469) 
0:01:43.079 ********** 2026-03-30 01:01:23.216417 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216425 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:23.216431 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216438 | orchestrator | 2026-03-30 01:01:23.216443 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-30 01:01:23.216457 | orchestrator | Monday 30 March 2026 01:00:06 +0000 (0:00:06.403) 0:01:49.483 ********** 2026-03-30 01:01:23.216468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.216486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.216540 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-30 01:01:23.216561 | orchestrator | 2026-03-30 01:01:23.216572 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-30 01:01:23.216577 | orchestrator | Monday 30 March 2026 01:00:11 +0000 (0:00:04.209) 0:01:53.693 ********** 2026-03-30 01:01:23.216583 | orchestrator | skipping: 
[testbed-node-0] 2026-03-30 01:01:23.216590 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:23.216596 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:23.216603 | orchestrator | 2026-03-30 01:01:23.216610 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-30 01:01:23.216616 | orchestrator | Monday 30 March 2026 01:00:11 +0000 (0:00:00.365) 0:01:54.059 ********** 2026-03-30 01:01:23.216623 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.216630 | orchestrator | 2026-03-30 01:01:23.216638 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-30 01:01:23.216643 | orchestrator | Monday 30 March 2026 01:00:13 +0000 (0:00:02.210) 0:01:56.269 ********** 2026-03-30 01:01:23.216647 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.216652 | orchestrator | 2026-03-30 01:01:23.216656 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-30 01:01:23.216661 | orchestrator | Monday 30 March 2026 01:00:15 +0000 (0:00:02.191) 0:01:58.461 ********** 2026-03-30 01:01:23.216665 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.216670 | orchestrator | 2026-03-30 01:01:23.216674 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-30 01:01:23.216679 | orchestrator | Monday 30 March 2026 01:00:17 +0000 (0:00:01.969) 0:02:00.430 ********** 2026-03-30 01:01:23.216683 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.216688 | orchestrator | 2026-03-30 01:01:23.216692 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-30 01:01:23.216697 | orchestrator | Monday 30 March 2026 01:00:48 +0000 (0:00:30.683) 0:02:31.114 ********** 2026-03-30 01:01:23.216701 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.216706 | orchestrator | 2026-03-30 
01:01:23.216731 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-30 01:01:23.216738 | orchestrator | Monday 30 March 2026 01:00:50 +0000 (0:00:01.903) 0:02:33.018 ********** 2026-03-30 01:01:23.216743 | orchestrator | 2026-03-30 01:01:23.216747 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-30 01:01:23.216752 | orchestrator | Monday 30 March 2026 01:00:50 +0000 (0:00:00.070) 0:02:33.088 ********** 2026-03-30 01:01:23.216756 | orchestrator | 2026-03-30 01:01:23.216761 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-30 01:01:23.216765 | orchestrator | Monday 30 March 2026 01:00:50 +0000 (0:00:00.063) 0:02:33.151 ********** 2026-03-30 01:01:23.216769 | orchestrator | 2026-03-30 01:01:23.216773 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-30 01:01:23.216778 | orchestrator | Monday 30 March 2026 01:00:50 +0000 (0:00:00.063) 0:02:33.215 ********** 2026-03-30 01:01:23.216783 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:23.216791 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:23.216796 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:23.216800 | orchestrator | 2026-03-30 01:01:23.216805 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:01:23.216810 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-03-30 01:01:23.216815 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-30 01:01:23.216819 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-30 01:01:23.216824 | orchestrator | 2026-03-30 01:01:23.216828 | orchestrator | 2026-03-30 01:01:23.216832 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:01:23.216837 | orchestrator | Monday 30 March 2026 01:01:20 +0000 (0:00:30.294) 0:03:03.509 ********** 2026-03-30 01:01:23.216841 | orchestrator | =============================================================================== 2026-03-30 01:01:23.216846 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.68s 2026-03-30 01:01:23.216850 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.29s 2026-03-30 01:01:23.216854 | orchestrator | service-ks-register : glance | Creating services ----------------------- 13.88s 2026-03-30 01:01:23.216859 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.73s 2026-03-30 01:01:23.216863 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.01s 2026-03-30 01:01:23.216867 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 6.40s 2026-03-30 01:01:23.216872 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.07s 2026-03-30 01:01:23.216876 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.90s 2026-03-30 01:01:23.216880 | orchestrator | glance : Copying over config.json files for services -------------------- 4.62s 2026-03-30 01:01:23.216885 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.57s 2026-03-30 01:01:23.216889 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.36s 2026-03-30 01:01:23.216896 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.26s 2026-03-30 01:01:23.216901 | orchestrator | glance : Check glance containers ---------------------------------------- 4.21s 2026-03-30 01:01:23.216905 | orchestrator | glance 
: Copying over glance-haproxy-tls.cfg ---------------------------- 4.20s 2026-03-30 01:01:23.216910 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.15s 2026-03-30 01:01:23.216914 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.05s 2026-03-30 01:01:23.216918 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.99s 2026-03-30 01:01:23.216923 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.60s 2026-03-30 01:01:23.216928 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.55s 2026-03-30 01:01:23.216932 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.47s 2026-03-30 01:01:23.216936 | orchestrator | 2026-03-30 01:01:23 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:23.216941 | orchestrator | 2026-03-30 01:01:23 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:23.216946 | orchestrator | 2026-03-30 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:26.265646 | orchestrator | 2026-03-30 01:01:26 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:26.266464 | orchestrator | 2026-03-30 01:01:26 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state STARTED 2026-03-30 01:01:26.268363 | orchestrator | 2026-03-30 01:01:26 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:26.270182 | orchestrator | 2026-03-30 01:01:26 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:26.270219 | orchestrator | 2026-03-30 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:29.313476 | orchestrator | 2026-03-30 01:01:29 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 
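The TASKS RECAP table above (produced by Ansible's `profile_tasks`-style timing output) ranks tasks by duration. A minimal sketch of how such recap lines could be parsed into `(task, seconds)` pairs for analysis — this is a hypothetical helper for post-processing the log, not part of the job itself:

```python
import re

# Hypothetical post-processing helper (not part of the job): parse
# Ansible "TASKS RECAP" lines such as
#   "glance : Restart glance-api container ------------------------- 30.29s"
# into (task_name, seconds) pairs so the slowest steps can be ranked.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+(?:\.\d+)?)s$")

def parse_recap(lines):
    """Return [(task, seconds)] sorted slowest-first; non-matching lines are skipped."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task").rstrip(), float(m.group("secs"))))
    return sorted(out, key=lambda t: t[1], reverse=True)

sample = [
    "glance : Running Glance bootstrap container ---------------------------- 30.68s",
    "glance : Restart glance-api container ---------------------------------- 30.29s",
    "service-ks-register : glance | Creating services ----------------------- 13.88s",
]
print(parse_recap(sample)[0])
# → ('glance : Running Glance bootstrap container', 30.68)
```

In this run the two ~30 s entries (the bootstrap container and the glance-api restart handler) dominate the ~3 min play, which matches the timestamps in the task headers above.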
2026-03-30 01:01:29.315059 | orchestrator | 2026-03-30 01:01:29 | INFO  | Task 70783807-1216-4bf2-8385-138fc1ef8c91 is in state SUCCESS 2026-03-30 01:01:29.316684 | orchestrator | 2026-03-30 01:01:29.316738 | orchestrator | 2026-03-30 01:01:29.316746 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:01:29.316753 | orchestrator | 2026-03-30 01:01:29.316758 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:01:29.316764 | orchestrator | Monday 30 March 2026 00:58:11 +0000 (0:00:00.290) 0:00:00.290 ********** 2026-03-30 01:01:29.316769 | orchestrator | ok: [testbed-manager] 2026-03-30 01:01:29.316776 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:01:29.316782 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:01:29.316787 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:01:29.316792 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:01:29.316798 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:01:29.316803 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:01:29.316808 | orchestrator | 2026-03-30 01:01:29.316813 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:01:29.316818 | orchestrator | Monday 30 March 2026 00:58:12 +0000 (0:00:00.719) 0:00:01.009 ********** 2026-03-30 01:01:29.316824 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316829 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316835 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316840 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316845 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316850 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316856 | 
orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-30 01:01:29.316861 | orchestrator | 2026-03-30 01:01:29.316866 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-30 01:01:29.316871 | orchestrator | 2026-03-30 01:01:29.316876 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-30 01:01:29.316881 | orchestrator | Monday 30 March 2026 00:58:13 +0000 (0:00:00.713) 0:00:01.723 ********** 2026-03-30 01:01:29.316887 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:01:29.316893 | orchestrator | 2026-03-30 01:01:29.316898 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-30 01:01:29.316903 | orchestrator | Monday 30 March 2026 00:58:14 +0000 (0:00:01.051) 0:00:02.774 ********** 2026-03-30 01:01:29.316919 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-30 01:01:29.316940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.316972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.316978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.316988 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.316992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.316996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317002 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317086 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-30 01:01:29.317093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317150 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317192 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317357 | orchestrator | 2026-03-30 01:01:29.317361 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-30 01:01:29.317364 | orchestrator | 
Monday 30 March 2026 00:58:18 +0000 (0:00:03.953) 0:00:06.727 ********** 2026-03-30 01:01:29.317367 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:01:29.317371 | orchestrator | 2026-03-30 01:01:29.317374 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-30 01:01:29.317381 | orchestrator | Monday 30 March 2026 00:58:19 +0000 (0:00:01.166) 0:00:07.893 ********** 2026-03-30 01:01:29.317385 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-30 01:01:29.317390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317394 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.317424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 
01:01:29.317445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317449 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317512 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-30 01:01:29.317519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 
01:01:29.317522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.317531 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.317548 | orchestrator | 2026-03-30 01:01:29.317552 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-30 01:01:29.317555 | orchestrator | Monday 30 March 2026 00:58:24 +0000 (0:00:05.317) 0:00:13.211 ********** 2026-03-30 01:01:29.317779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-30 01:01:29.317797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.317807 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.317814 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-30 01:01:29.317824 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.317830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.317841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.317846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.317852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.317860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.317987 | 
orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.317993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.317997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318088 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.318092 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.318096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 
01:01:29.318108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318120 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.318134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318145 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.318149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318332 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.318338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318401 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.318406 | orchestrator | 2026-03-30 01:01:29.318411 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-30 01:01:29.318417 | orchestrator | Monday 30 March 2026 00:58:26 +0000 (0:00:01.445) 0:00:14.657 ********** 2026-03-30 01:01:29.318423 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-30 01:01:29.318428 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318440 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-30 01:01:29.318456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318489 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.318492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318526 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.318531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318540 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.318628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-30 01:01:29.318638 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.318659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318665 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318683 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.318693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318724 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.318727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-30 01:01:29.318731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-30 01:01:29.318765 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.318768 | orchestrator | 2026-03-30 01:01:29.318771 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-30 01:01:29.318775 | orchestrator | Monday 30 March 2026 00:58:28 +0000 (0:00:01.931) 0:00:16.589 ********** 2026-03-30 01:01:29.318778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.318781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-30 01:01:29.318785 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-30 01:01:29.318800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.318803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.318807 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.318819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.318823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318829 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.318832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318869 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318974 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-30 01:01:29.318979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.318988 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.318998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.319004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.319009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.319015 | orchestrator | 2026-03-30 01:01:29.319020 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-30 01:01:29.319025 | orchestrator | Monday 30 March 2026 00:58:34 +0000 (0:00:05.890) 0:00:22.480 ********** 2026-03-30 01:01:29.319031 
| orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 01:01:29.319037 | orchestrator | 2026-03-30 01:01:29.319041 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-30 01:01:29.319056 | orchestrator | Monday 30 March 2026 00:58:35 +0000 (0:00:00.943) 0:00:23.423 ********** 2026-03-30 01:01:29.319060 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319063 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319069 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319076 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319079 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319083 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1096757, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3783333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319096 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319102 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1096757, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3783333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319108 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319117 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319126 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319132 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096777, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3849788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319138 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319158 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096777, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3849788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319162 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096765, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.380422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319166 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1096757, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 
'ctime': 1774829831.3783333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319171 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096754, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.377384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319177 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096754, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.377384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319180 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096777, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3849788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319183 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1096757, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3783333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319195 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096767, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3811882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319199 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319205 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319208 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1096783, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3870387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319213 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1096757, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3783333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319216 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096767, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3811882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319220 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096754, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.377384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319223 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1096774, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.384168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319239 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096777, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 
1774829831.3849788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-30 01:01:29.319259 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2026-03-30 01:01:29.319265 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-03-30 01:01:29.319273 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-03-30 01:01:29.319278 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2026-03-30 01:01:29.319284 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-03-30 01:01:29.319289 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-03-30 01:01:29.319309 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-03-30 01:01:29.319318 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2026-03-30 01:01:29.319324 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-30 01:01:29.319336 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-03-30 01:01:29.319344 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-03-30 01:01:29.319349 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-30 01:01:29.319355 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-03-30 01:01:29.319374 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-03-30 01:01:29.319383 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-03-30 01:01:29.319389 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-03-30 01:01:29.319394 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-30 01:01:29.319402 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-30 01:01:29.319407 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-03-30 01:01:29.319413 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-30 01:01:29.319431 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-30 01:01:29.319435 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-03-30 01:01:29.319438 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-03-30 01:01:29.319442 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-30 01:01:29.319447 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-03-30 01:01:29.319450 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-30 01:01:29.319453 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-03-30 01:01:29.319469 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-30 01:01:29.319473 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-03-30 01:01:29.319476 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-30 01:01:29.319479 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-03-30 01:01:29.319484 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-03-30 01:01:29.319488 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-30 01:01:29.319491 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-30 01:01:29.319505 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-30 01:01:29.319509 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-30 01:01:29.319512 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-30 01:01:29.319515 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-03-30 01:01:29.319520 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-30 01:01:29.319523 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-30 01:01:29.319529 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-30 01:01:29.319541 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-03-30 01:01:29.319545 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-03-30 01:01:29.319548 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-30 01:01:29.319552 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-03-30 01:01:29.319557 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-30 01:01:29.319560 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-30 01:01:29.319566 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-30 01:01:29.319571 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-03-30 01:01:29.319575 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-30 01:01:29.319578 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-03-30 01:01:29.319581 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-30 01:01:29.319586 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2026-03-30 01:01:29.319589 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-03-30 01:01:29.319595 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-03-30 01:01:29.319602 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2026-03-30 01:01:29.319606 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-03-30 01:01:29.319610 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2026-03-30 01:01:29.319613 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-30 01:01:29.319619 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-03-30 01:01:29.319623 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-30 01:01:29.319628 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-30 01:01:29.319637 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-03-30 01:01:29.319643 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2026-03-30 01:01:29.319649 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:01:29.319655 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-03-30 01:01:29.319661 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-30 01:01:29.319667 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2026-03-30 01:01:29.319674 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:01:29.319678 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096788, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 
1774829831.3881273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319682 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.319687 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096756, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3776495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319696 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096752, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3763332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319743 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1096753, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3763332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319751 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1096753, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3763332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319757 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096789, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3886654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319768 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096773, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 
01:01:29.319778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1096774, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.384168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319784 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096773, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319793 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096781, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3853335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319798 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096770, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319804 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096770, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319809 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096756, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3776495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319821 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096788, 'dev': 119, 'nlink': 1, 
'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3881273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319827 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.319833 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096788, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3881273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319839 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.319845 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1096753, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3763332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319854 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096769, 'dev': 119, 'nlink': 1, 'atime': 
1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3813334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319858 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096773, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319861 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096770, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319867 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096788, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3881273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-30 01:01:29.319876 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.319884 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096763, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3793333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319889 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096782, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3863335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319895 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096752, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3763332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319904 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096789, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3886654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319910 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096781, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3853335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319916 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096756, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3776495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319922 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1096753, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3763332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319933 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096773, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319939 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096770, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3833334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319945 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096788, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3881273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-30 01:01:29.319951 | orchestrator | 2026-03-30 01:01:29.319956 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-30 01:01:29.319962 | orchestrator | Monday 30 March 2026 00:58:59 +0000 (0:00:24.168) 0:00:47.592 ********** 2026-03-30 01:01:29.319967 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 01:01:29.319973 | orchestrator | 2026-03-30 01:01:29.319981 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-30 01:01:29.319986 | orchestrator | Monday 30 March 2026 00:59:00 +0000 (0:00:01.026) 0:00:48.619 ********** 2026-03-30 01:01:29.319992 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.319999 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320005 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320011 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320016 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320021 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 01:01:29.320025 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320029 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320034 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320039 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 
01:01:29.320045 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320050 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 01:01:29.320056 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320066 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320071 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320081 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320086 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-30 01:01:29.320092 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320097 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320102 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320107 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320112 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320118 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-30 01:01:29.320123 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320128 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320134 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320144 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320150 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 01:01:29.320155 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320161 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320166 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320172 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320180 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320186 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-30 01:01:29.320191 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320202 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-30 01:01:29.320207 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-30 01:01:29.320212 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-30 01:01:29.320218 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-30 01:01:29.320223 | orchestrator | 2026-03-30 01:01:29.320229 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-30 01:01:29.320234 | orchestrator | Monday 30 March 2026 00:59:02 +0000 (0:00:02.003) 0:00:50.622 ********** 2026-03-30 01:01:29.320239 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-30 01:01:29.320245 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320250 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-30 01:01:29.320255 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320261 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-30 01:01:29.320266 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320271 | orchestrator | skipping: 
[testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-30 01:01:29.320277 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320282 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-30 01:01:29.320287 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320293 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-30 01:01:29.320298 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320308 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-30 01:01:29.320313 | orchestrator | 2026-03-30 01:01:29.320319 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-30 01:01:29.320324 | orchestrator | Monday 30 March 2026 00:59:17 +0000 (0:00:14.853) 0:01:05.476 ********** 2026-03-30 01:01:29.320333 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-30 01:01:29.320338 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-30 01:01:29.320344 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320349 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320354 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-30 01:01:29.320359 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320364 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-30 01:01:29.320370 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320375 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-30 01:01:29.320380 
| orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320385 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-30 01:01:29.320390 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320396 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-30 01:01:29.320401 | orchestrator | 2026-03-30 01:01:29.320407 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-30 01:01:29.320412 | orchestrator | Monday 30 March 2026 00:59:21 +0000 (0:00:04.553) 0:01:10.030 ********** 2026-03-30 01:01:29.320417 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-30 01:01:29.320423 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-30 01:01:29.320429 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320434 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320440 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-30 01:01:29.320445 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320451 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-30 01:01:29.320456 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320462 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-30 01:01:29.320467 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320472 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-30 01:01:29.320478 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320483 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-30 01:01:29.320488 | orchestrator | 2026-03-30 01:01:29.320496 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-30 01:01:29.320502 | orchestrator | Monday 30 March 2026 00:59:24 +0000 (0:00:02.537) 0:01:12.567 ********** 2026-03-30 01:01:29.320507 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 01:01:29.320513 | orchestrator | 2026-03-30 01:01:29.320518 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-30 01:01:29.320524 | orchestrator | Monday 30 March 2026 00:59:25 +0000 (0:00:01.195) 0:01:13.763 ********** 2026-03-30 01:01:29.320533 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.320539 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320544 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320549 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320554 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320559 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320564 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320569 | orchestrator | 2026-03-30 01:01:29.320574 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-30 01:01:29.320580 | orchestrator | Monday 30 March 2026 00:59:26 +0000 (0:00:00.853) 0:01:14.616 ********** 2026-03-30 01:01:29.320594 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.320599 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320605 | orchestrator | skipping: [testbed-node-4] 
2026-03-30 01:01:29.320610 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:29.320615 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320621 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:29.320626 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:29.320631 | orchestrator | 2026-03-30 01:01:29.320637 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-30 01:01:29.320642 | orchestrator | Monday 30 March 2026 00:59:29 +0000 (0:00:02.858) 0:01:17.475 ********** 2026-03-30 01:01:29.320647 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320653 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.320658 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320663 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320668 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320673 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320678 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320684 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320693 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320699 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320719 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320725 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320730 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-30 01:01:29.320736 | orchestrator | skipping: [testbed-node-5] 
2026-03-30 01:01:29.320742 | orchestrator | 2026-03-30 01:01:29.320747 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-30 01:01:29.320754 | orchestrator | Monday 30 March 2026 00:59:30 +0000 (0:00:01.617) 0:01:19.092 ********** 2026-03-30 01:01:29.320759 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-30 01:01:29.320765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-30 01:01:29.320770 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-30 01:01:29.320776 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-30 01:01:29.320782 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-30 01:01:29.320787 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320793 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320798 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320805 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320816 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-30 01:01:29.320822 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320828 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-30 01:01:29.320834 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320839 | orchestrator | 2026-03-30 01:01:29.320844 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-30 01:01:29.320850 | orchestrator 
| Monday 30 March 2026 00:59:33 +0000 (0:00:02.305) 0:01:21.397 ********** 2026-03-30 01:01:29.320855 | orchestrator | [WARNING]: Skipped 2026-03-30 01:01:29.320861 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-30 01:01:29.320867 | orchestrator | due to this access issue: 2026-03-30 01:01:29.320872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-30 01:01:29.320878 | orchestrator | not a directory 2026-03-30 01:01:29.320883 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-30 01:01:29.320889 | orchestrator | 2026-03-30 01:01:29.320894 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-30 01:01:29.320899 | orchestrator | Monday 30 March 2026 00:59:34 +0000 (0:00:00.997) 0:01:22.395 ********** 2026-03-30 01:01:29.320905 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.320913 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320919 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320924 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:01:29.320930 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320935 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320941 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.320947 | orchestrator | 2026-03-30 01:01:29.320952 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-30 01:01:29.320957 | orchestrator | Monday 30 March 2026 00:59:34 +0000 (0:00:00.804) 0:01:23.199 ********** 2026-03-30 01:01:29.320963 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.320968 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:29.320974 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:29.320979 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:29.320984 | orchestrator | skipping: 
[testbed-node-3] 2026-03-30 01:01:29.320989 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:01:29.320995 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:01:29.321000 | orchestrator | 2026-03-30 01:01:29.321006 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-30 01:01:29.321011 | orchestrator | Monday 30 March 2026 00:59:35 +0000 (0:00:00.907) 0:01:24.107 ********** 2026-03-30 01:01:29.321017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321054 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-30 01:01:29.321062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321069 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321074 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-30 01:01:29.321080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321118 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321160 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-30 01:01:29.321167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321181 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-30 01:01:29.321211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-30 01:01:29.321231 | orchestrator | 2026-03-30 01:01:29.321237 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-30 01:01:29.321242 | orchestrator | Monday 30 March 2026 00:59:40 +0000 (0:00:04.809) 0:01:28.917 ********** 2026-03-30 01:01:29.321248 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-30 01:01:29.321253 | orchestrator | skipping: [testbed-manager] 2026-03-30 01:01:29.321259 | orchestrator | 2026-03-30 01:01:29.321264 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-30 01:01:29.321269 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.916) 0:01:29.833 ********** 2026-03-30 01:01:29.321275 | orchestrator | 2026-03-30 01:01:29.321280 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2026-03-30 01:01:29.321286 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.062) 0:01:29.896 ********** 2026-03-30 01:01:29.321291 | orchestrator | 2026-03-30 01:01:29.321296 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-30 01:01:29.321301 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.071) 0:01:29.967 ********** 2026-03-30 01:01:29.321310 | orchestrator | 2026-03-30 01:01:29.321315 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-30 01:01:29.321321 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.064) 0:01:30.032 ********** 2026-03-30 01:01:29.321326 | orchestrator | 2026-03-30 01:01:29.321331 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-30 01:01:29.321336 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.065) 0:01:30.098 ********** 2026-03-30 01:01:29.321342 | orchestrator | 2026-03-30 01:01:29.321348 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-30 01:01:29.321353 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.072) 0:01:30.170 ********** 2026-03-30 01:01:29.321358 | orchestrator | 2026-03-30 01:01:29.321364 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-30 01:01:29.321369 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.061) 0:01:30.231 ********** 2026-03-30 01:01:29.321374 | orchestrator | 2026-03-30 01:01:29.321379 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-30 01:01:29.321385 | orchestrator | Monday 30 March 2026 00:59:41 +0000 (0:00:00.086) 0:01:30.318 ********** 2026-03-30 01:01:29.321390 | orchestrator | changed: [testbed-manager] 2026-03-30 01:01:29.321395 | 
orchestrator | 2026-03-30 01:01:29.321400 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-30 01:01:29.321410 | orchestrator | Monday 30 March 2026 01:00:00 +0000 (0:00:18.974) 0:01:49.293 ********** 2026-03-30 01:01:29.321414 | orchestrator | changed: [testbed-manager] 2026-03-30 01:01:29.321417 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:01:29.321420 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:01:29.321423 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:29.321426 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:29.321429 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:29.321433 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:01:29.321436 | orchestrator | 2026-03-30 01:01:29.321439 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-30 01:01:29.321442 | orchestrator | Monday 30 March 2026 01:00:15 +0000 (0:00:14.780) 0:02:04.073 ********** 2026-03-30 01:01:29.321445 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:29.321449 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:29.321452 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:29.321455 | orchestrator | 2026-03-30 01:01:29.321458 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-30 01:01:29.321461 | orchestrator | Monday 30 March 2026 01:00:25 +0000 (0:00:09.641) 0:02:13.715 ********** 2026-03-30 01:01:29.321464 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:29.321467 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:29.321470 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:29.321473 | orchestrator | 2026-03-30 01:01:29.321476 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-30 01:01:29.321480 | orchestrator | Monday 30 March 2026 01:00:35 +0000 
(0:00:09.826) 0:02:23.542 ********** 2026-03-30 01:01:29.321483 | orchestrator | changed: [testbed-manager] 2026-03-30 01:01:29.321486 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:01:29.321490 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:29.321493 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:29.321496 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:01:29.321499 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:01:29.321502 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:29.321505 | orchestrator | 2026-03-30 01:01:29.321508 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-30 01:01:29.321511 | orchestrator | Monday 30 March 2026 01:00:48 +0000 (0:00:13.636) 0:02:37.178 ********** 2026-03-30 01:01:29.321514 | orchestrator | changed: [testbed-manager] 2026-03-30 01:01:29.321518 | orchestrator | 2026-03-30 01:01:29.321521 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-30 01:01:29.321527 | orchestrator | Monday 30 March 2026 01:00:55 +0000 (0:00:06.209) 0:02:43.387 ********** 2026-03-30 01:01:29.321530 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:29.321541 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:29.321544 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:29.321548 | orchestrator | 2026-03-30 01:01:29.321551 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-30 01:01:29.321554 | orchestrator | Monday 30 March 2026 01:01:08 +0000 (0:00:13.046) 0:02:56.434 ********** 2026-03-30 01:01:29.321557 | orchestrator | changed: [testbed-manager] 2026-03-30 01:01:29.321560 | orchestrator | 2026-03-30 01:01:29.321563 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-30 01:01:29.321567 | orchestrator | Monday 30 March 2026 01:01:14 +0000 
(0:00:06.851) 0:03:03.285 ********** 2026-03-30 01:01:29.321570 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:01:29.321573 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:01:29.321581 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:01:29.321586 | orchestrator | 2026-03-30 01:01:29.321594 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:01:29.321602 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-30 01:01:29.321610 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-30 01:01:29.321615 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-30 01:01:29.321619 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-30 01:01:29.321623 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-30 01:01:29.321629 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-30 01:01:29.321633 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-30 01:01:29.321638 | orchestrator | 2026-03-30 01:01:29.321643 | orchestrator | 2026-03-30 01:01:29.321647 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:01:29.321652 | orchestrator | Monday 30 March 2026 01:01:25 +0000 (0:00:11.015) 0:03:14.301 ********** 2026-03-30 01:01:29.321657 | orchestrator | =============================================================================== 2026-03-30 01:01:29.321662 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.17s 2026-03-30 01:01:29.321667 | orchestrator | prometheus : 
Restart prometheus-server container ----------------------- 18.97s 2026-03-30 01:01:29.321672 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.85s 2026-03-30 01:01:29.321676 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.78s 2026-03-30 01:01:29.321681 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.64s 2026-03-30 01:01:29.321690 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.05s 2026-03-30 01:01:29.321695 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.02s 2026-03-30 01:01:29.321700 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.83s 2026-03-30 01:01:29.321718 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.64s 2026-03-30 01:01:29.321724 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.85s 2026-03-30 01:01:29.321741 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.21s 2026-03-30 01:01:29.321744 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.89s 2026-03-30 01:01:29.321748 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.32s 2026-03-30 01:01:29.321751 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.81s 2026-03-30 01:01:29.321754 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.55s 2026-03-30 01:01:29.321757 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.95s 2026-03-30 01:01:29.321760 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.86s 2026-03-30 01:01:29.321763 | orchestrator | prometheus : Copying over 
prometheus alertmanager config file ----------- 2.54s 2026-03-30 01:01:29.321766 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.31s 2026-03-30 01:01:29.321770 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.00s 2026-03-30 01:01:29.321773 | orchestrator | 2026-03-30 01:01:29 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:29.321776 | orchestrator | 2026-03-30 01:01:29 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:29.321779 | orchestrator | 2026-03-30 01:01:29 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:29.321782 | orchestrator | 2026-03-30 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:32.365319 | orchestrator | 2026-03-30 01:01:32 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:32.366786 | orchestrator | 2026-03-30 01:01:32 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:32.368324 | orchestrator | 2026-03-30 01:01:32 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:32.369553 | orchestrator | 2026-03-30 01:01:32 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:32.369983 | orchestrator | 2026-03-30 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:35.408657 | orchestrator | 2026-03-30 01:01:35 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:35.410376 | orchestrator | 2026-03-30 01:01:35 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:35.412111 | orchestrator | 2026-03-30 01:01:35 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:35.417435 | orchestrator | 2026-03-30 01:01:35 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:35.417518 | orchestrator | 2026-03-30 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:38.463407 | orchestrator | 2026-03-30 01:01:38 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:38.464984 | orchestrator | 2026-03-30 01:01:38 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:38.466640 | orchestrator | 2026-03-30 01:01:38 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:38.468230 | orchestrator | 2026-03-30 01:01:38 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:38.469242 | orchestrator | 2026-03-30 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:41.517183 | orchestrator | 2026-03-30 01:01:41 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:41.517256 | orchestrator | 2026-03-30 01:01:41 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:41.517293 | orchestrator | 2026-03-30 01:01:41 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:41.517305 | orchestrator | 2026-03-30 01:01:41 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:41.517327 | orchestrator | 2026-03-30 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:44.560402 | orchestrator | 2026-03-30 01:01:44 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:44.561593 | orchestrator | 2026-03-30 01:01:44 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:44.562404 | orchestrator | 2026-03-30 01:01:44 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:44.565089 | orchestrator | 2026-03-30 01:01:44 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:44.565190 | orchestrator | 2026-03-30 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:47.613654 | orchestrator | 2026-03-30 01:01:47 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:47.613737 | orchestrator | 2026-03-30 01:01:47 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:47.613746 | orchestrator | 2026-03-30 01:01:47 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:47.614130 | orchestrator | 2026-03-30 01:01:47 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:47.614622 | orchestrator | 2026-03-30 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:50.658097 | orchestrator | 2026-03-30 01:01:50 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state STARTED 2026-03-30 01:01:50.659141 | orchestrator | 2026-03-30 01:01:50 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:50.661083 | orchestrator | 2026-03-30 01:01:50 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:50.662579 | orchestrator | 2026-03-30 01:01:50 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:50.662751 | orchestrator | 2026-03-30 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:53.720772 | orchestrator | 2026-03-30 01:01:53 | INFO  | Task f89f265d-5553-4766-bfb1-f90a19e8132b is in state SUCCESS 2026-03-30 01:01:53.721619 | orchestrator | 2026-03-30 01:01:53.721671 | orchestrator | 2026-03-30 01:01:53.721681 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:01:53.721688 | orchestrator | 2026-03-30 01:01:53.721695 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2026-03-30 01:01:53.721702 | orchestrator | Monday 30 March 2026 00:58:45 +0000 (0:00:00.590) 0:00:00.590 ********** 2026-03-30 01:01:53.721708 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:01:53.721715 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:01:53.721739 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:01:53.721759 | orchestrator | 2026-03-30 01:01:53.721766 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:01:53.721772 | orchestrator | Monday 30 March 2026 00:58:45 +0000 (0:00:00.513) 0:00:01.103 ********** 2026-03-30 01:01:53.721779 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-30 01:01:53.721795 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-30 01:01:53.721801 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-30 01:01:53.721808 | orchestrator | 2026-03-30 01:01:53.721814 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-30 01:01:53.721820 | orchestrator | 2026-03-30 01:01:53.721826 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-30 01:01:53.721847 | orchestrator | Monday 30 March 2026 00:58:46 +0000 (0:00:00.338) 0:00:01.442 ********** 2026-03-30 01:01:53.721853 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:01:53.721860 | orchestrator | 2026-03-30 01:01:53.721866 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-30 01:01:53.721873 | orchestrator | Monday 30 March 2026 00:58:46 +0000 (0:00:00.688) 0:00:02.130 ********** 2026-03-30 01:01:53.721880 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-30 01:01:53.721886 | orchestrator | 2026-03-30 01:01:53.721892 | orchestrator | TASK [service-ks-register : cinder | Creating 
endpoints] *********************** 2026-03-30 01:01:53.721898 | orchestrator | Monday 30 March 2026 00:58:50 +0000 (0:00:03.980) 0:00:06.110 ********** 2026-03-30 01:01:53.721951 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-30 01:01:53.721957 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-30 01:01:53.721965 | orchestrator | 2026-03-30 01:01:53.721971 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-30 01:01:53.721977 | orchestrator | Monday 30 March 2026 00:58:58 +0000 (0:00:07.797) 0:00:13.907 ********** 2026-03-30 01:01:53.721984 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-30 01:01:53.721990 | orchestrator | 2026-03-30 01:01:53.721996 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-30 01:01:53.722002 | orchestrator | Monday 30 March 2026 00:59:02 +0000 (0:00:03.618) 0:00:17.526 ********** 2026-03-30 01:01:53.722245 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-30 01:01:53.722253 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-30 01:01:53.722259 | orchestrator | 2026-03-30 01:01:53.722265 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-30 01:01:53.722271 | orchestrator | Monday 30 March 2026 00:59:06 +0000 (0:00:03.993) 0:00:21.520 ********** 2026-03-30 01:01:53.722277 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-30 01:01:53.722284 | orchestrator | 2026-03-30 01:01:53.722290 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-30 01:01:53.722296 | orchestrator | Monday 30 March 2026 00:59:09 +0000 (0:00:03.627) 0:00:25.147 ********** 2026-03-30 01:01:53.722302 
| orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-30 01:01:53.722309 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-30 01:01:53.722439 | orchestrator | 2026-03-30 01:01:53.722445 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-30 01:01:53.722452 | orchestrator | Monday 30 March 2026 00:59:17 +0000 (0:00:08.223) 0:00:33.371 ********** 2026-03-30 01:01:53.722460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 01:01:53.722491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-30 01:01:53.722511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 01:01:53.722519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 01:01:53.722541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-30 
01:01:53.722568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.722604 | orchestrator | 2026-03-30 01:01:53.722611 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-30 01:01:53.722618 | orchestrator | Monday 30 March 2026 00:59:21 +0000 (0:00:03.948) 0:00:37.320 ********** 2026-03-30 01:01:53.722629 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.722635 | orchestrator | skipping: [testbed-node-1] 2026-03-30 
01:01:53.722667 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:53.722676 | orchestrator | 2026-03-30 01:01:53.722694 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-30 01:01:53.722700 | orchestrator | Monday 30 March 2026 00:59:22 +0000 (0:00:00.493) 0:00:37.813 ********** 2026-03-30 01:01:53.722707 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:01:53.722724 | orchestrator | 2026-03-30 01:01:53.722730 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-30 01:01:53.722747 | orchestrator | Monday 30 March 2026 00:59:23 +0000 (0:00:01.272) 0:00:39.086 ********** 2026-03-30 01:01:53.722770 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-30 01:01:53.722777 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-30 01:01:53.722783 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-30 01:01:53.722789 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-30 01:01:53.722796 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-30 01:01:53.722802 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-30 01:01:53.722815 | orchestrator | 2026-03-30 01:01:53.722821 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-30 01:01:53.722827 | orchestrator | Monday 30 March 2026 00:59:26 +0000 (0:00:02.724) 0:00:41.811 ********** 2026-03-30 01:01:53.722837 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-30 01:01:53.722845 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-30 01:01:53.722852 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-30 01:01:53.722864 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-30 01:01:53.722885 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-30 01:01:53.722895 | orchestrator | 
skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-30 01:01:53.722901 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-30 01:01:53.722907 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-30 01:01:53.722919 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-30 01:01:53.722939 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-30 01:01:53.722949 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-30 01:01:53.722956 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-30 01:01:53.722962 | orchestrator | 2026-03-30 01:01:53.722968 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-30 01:01:53.722974 | orchestrator | Monday 30 March 2026 00:59:30 +0000 (0:00:04.163) 0:00:45.974 ********** 2026-03-30 01:01:53.722981 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:53.722988 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:53.722994 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-30 01:01:53.723000 | orchestrator | 2026-03-30 01:01:53.723006 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-30 01:01:53.723012 | orchestrator | Monday 30 March 2026 00:59:32 +0000 (0:00:01.560) 0:00:47.534 ********** 2026-03-30 01:01:53.723019 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-30 01:01:53.723029 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-30 01:01:53.723036 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-30 01:01:53.723042 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 01:01:53.723048 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 01:01:53.723054 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-30 01:01:53.723061 | orchestrator | 2026-03-30 01:01:53.723067 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-30 01:01:53.723074 | orchestrator | Monday 30 March 2026 00:59:35 +0000 (0:00:03.282) 0:00:50.817 ********** 2026-03-30 01:01:53.723081 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-30 01:01:53.723088 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-30 01:01:53.723094 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-30 01:01:53.723118 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-30 01:01:53.723125 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-30 01:01:53.723132 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-30 01:01:53.723139 | orchestrator | 2026-03-30 01:01:53.723146 | orchestrator | TASK [cinder : Check if policies 
shall be overwritten] ************************* 2026-03-30 01:01:53.723153 | orchestrator | Monday 30 March 2026 00:59:36 +0000 (0:00:01.255) 0:00:52.072 ********** 2026-03-30 01:01:53.723160 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.723167 | orchestrator | 2026-03-30 01:01:53.723174 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-30 01:01:53.723180 | orchestrator | Monday 30 March 2026 00:59:36 +0000 (0:00:00.174) 0:00:52.247 ********** 2026-03-30 01:01:53.723187 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.723193 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:53.723200 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:53.723206 | orchestrator | 2026-03-30 01:01:53.723212 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-30 01:01:53.723218 | orchestrator | Monday 30 March 2026 00:59:37 +0000 (0:00:00.440) 0:00:52.688 ********** 2026-03-30 01:01:53.723225 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:01:53.723231 | orchestrator | 2026-03-30 01:01:53.723237 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-30 01:01:53.723259 | orchestrator | Monday 30 March 2026 00:59:37 +0000 (0:00:00.510) 0:00:53.198 ********** 2026-03-30 01:01:53.723269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 01:01:53.723277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 01:01:53.723292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-30 01:01:53.723299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-30 01:01:53.723393 | orchestrator | 2026-03-30 01:01:53.723412 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-30 01:01:53.723418 | orchestrator | Monday 30 March 2026 00:59:42 +0000 (0:00:04.481) 0:00:57.680 ********** 2026-03-30 01:01:53.723425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 01:01:53.723435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723474 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:53.723495 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 01:01:53.723506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723532 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.723539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-30 01:01:53.723546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723578 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:53.723586 | orchestrator | 2026-03-30 01:01:53.723592 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-30 01:01:53.723599 | orchestrator | Monday 30 March 2026 00:59:43 +0000 (0:00:01.141) 0:00:58.821 ********** 2026-03-30 01:01:53.723606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 01:01:53.723614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2026-03-30 01:01:53.723621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723639 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.723647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 01:01:53.723694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723718 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:53.723725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-30 01:01:53.723737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-30 01:01:53.723769 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:53.723776 | orchestrator | 2026-03-30 01:01:53.723783 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-30 01:01:53.723848 | 
orchestrator | Monday 30 March 2026 00:59:44 +0000 (0:00:01.336) 0:01:00.158 **********
2026-03-30 01:01:53.723855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.723878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.723893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.723904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes':
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.723976 | orchestrator |
2026-03-30 01:01:53.723982 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-30 01:01:53.723989 | orchestrator | Monday 30 March 2026 00:59:48 +0000 (0:00:04.238) 0:01:04.396 **********
2026-03-30 01:01:53.723995 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-30 01:01:53.724001 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-30 01:01:53.724007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-30 01:01:53.724013 | orchestrator |
2026-03-30 01:01:53.724020 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-30 01:01:53.724026 | orchestrator | Monday 30 March 2026 00:59:51 +0000 (0:00:02.479) 0:01:06.876 **********
2026-03-30 01:01:53.724035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724132 | orchestrator |
2026-03-30 01:01:53.724138 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-30 01:01:53.724144 | orchestrator | Monday 30 March 2026 01:00:07 +0000 (0:00:16.034) 0:01:22.910 **********
2026-03-30 01:01:53.724151 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:01:53.724157 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:01:53.724163 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:01:53.724169 | orchestrator |
2026-03-30 01:01:53.724176 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-03-30 01:01:53.724184 | orchestrator | Monday 30 March 2026 01:00:09 +0000 (0:00:02.302) 0:01:25.213 **********
2026-03-30 01:01:53.724190 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:01:53.724196 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:01:53.724203 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:01:53.724209 | orchestrator |
2026-03-30 01:01:53.724215 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-30 01:01:53.724221 | orchestrator | Monday 30 March 2026 01:00:11 +0000 (0:00:01.640) 0:01:26.853 **********
2026-03-30 01:01:53.724230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724274 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:01:53.724293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724322 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:01:53.724328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724364 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:01:53.724370 | orchestrator |
2026-03-30 01:01:53.724375 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-30 01:01:53.724382 | orchestrator | Monday 30 March 2026 01:00:12 +0000 (0:00:00.639) 0:01:27.492 **********
2026-03-30 01:01:53.724387 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:01:53.724393 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:01:53.724399 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:01:53.724416 | orchestrator |
2026-03-30 01:01:53.724422 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-30 01:01:53.724428 | orchestrator | Monday 30 March 2026 01:00:12 +0000 (0:00:00.274) 0:01:27.766 **********
2026-03-30 01:01:53.724434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-30 01:01:53.724461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-30 01:01:53.724539 | orchestrator |
2026-03-30 01:01:53.724545 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-30 01:01:53.724551 | orchestrator | Monday 30 March
2026 01:00:15 +0000 (0:00:02.765) 0:01:30.532 ********** 2026-03-30 01:01:53.724558 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.724564 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:01:53.724570 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:01:53.724576 | orchestrator | 2026-03-30 01:01:53.724581 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-30 01:01:53.724586 | orchestrator | Monday 30 March 2026 01:00:15 +0000 (0:00:00.242) 0:01:30.775 ********** 2026-03-30 01:01:53.724593 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724599 | orchestrator | 2026-03-30 01:01:53.724605 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-30 01:01:53.724612 | orchestrator | Monday 30 March 2026 01:00:17 +0000 (0:00:02.031) 0:01:32.806 ********** 2026-03-30 01:01:53.724618 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724624 | orchestrator | 2026-03-30 01:01:53.724631 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-30 01:01:53.724638 | orchestrator | Monday 30 March 2026 01:00:19 +0000 (0:00:02.201) 0:01:35.008 ********** 2026-03-30 01:01:53.724644 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724650 | orchestrator | 2026-03-30 01:01:53.724667 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-30 01:01:53.724674 | orchestrator | Monday 30 March 2026 01:00:42 +0000 (0:00:22.988) 0:01:57.996 ********** 2026-03-30 01:01:53.724681 | orchestrator | 2026-03-30 01:01:53.724687 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-30 01:01:53.724693 | orchestrator | Monday 30 March 2026 01:00:42 +0000 (0:00:00.063) 0:01:58.059 ********** 2026-03-30 01:01:53.724700 | orchestrator | 2026-03-30 01:01:53.724706 | orchestrator | 
TASK [cinder : Flush handlers] ************************************************* 2026-03-30 01:01:53.724714 | orchestrator | Monday 30 March 2026 01:00:42 +0000 (0:00:00.066) 0:01:58.125 ********** 2026-03-30 01:01:53.724721 | orchestrator | 2026-03-30 01:01:53.724728 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-30 01:01:53.724735 | orchestrator | Monday 30 March 2026 01:00:42 +0000 (0:00:00.069) 0:01:58.195 ********** 2026-03-30 01:01:53.724741 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724764 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:53.724795 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:53.724803 | orchestrator | 2026-03-30 01:01:53.724810 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-30 01:01:53.724817 | orchestrator | Monday 30 March 2026 01:01:08 +0000 (0:00:25.273) 0:02:23.469 ********** 2026-03-30 01:01:53.724824 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724831 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:53.724837 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:53.724844 | orchestrator | 2026-03-30 01:01:53.724851 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-30 01:01:53.724857 | orchestrator | Monday 30 March 2026 01:01:14 +0000 (0:00:06.048) 0:02:29.517 ********** 2026-03-30 01:01:53.724864 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724870 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:53.724876 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:53.724883 | orchestrator | 2026-03-30 01:01:53.724890 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-30 01:01:53.724902 | orchestrator | Monday 30 March 2026 01:01:39 +0000 (0:00:25.870) 0:02:55.388 ********** 2026-03-30 01:01:53.724908 
| orchestrator | changed: [testbed-node-0] 2026-03-30 01:01:53.724915 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:01:53.724922 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:01:53.724934 | orchestrator | 2026-03-30 01:01:53.724940 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-30 01:01:53.724947 | orchestrator | Monday 30 March 2026 01:01:50 +0000 (0:00:10.653) 0:03:06.041 ********** 2026-03-30 01:01:53.724954 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:01:53.724960 | orchestrator | 2026-03-30 01:01:53.724967 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:01:53.724978 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-30 01:01:53.724986 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-30 01:01:53.724992 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-30 01:01:53.724998 | orchestrator | 2026-03-30 01:01:53.725004 | orchestrator | 2026-03-30 01:01:53.725011 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:01:53.725017 | orchestrator | Monday 30 March 2026 01:01:50 +0000 (0:00:00.237) 0:03:06.279 ********** 2026-03-30 01:01:53.725023 | orchestrator | =============================================================================== 2026-03-30 01:01:53.725030 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.87s 2026-03-30 01:01:53.725037 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.27s 2026-03-30 01:01:53.725043 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.99s 2026-03-30 01:01:53.725050 | orchestrator | cinder : Copying 
over cinder.conf -------------------------------------- 16.03s 2026-03-30 01:01:53.725056 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.65s 2026-03-30 01:01:53.725063 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.22s 2026-03-30 01:01:53.725070 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.80s 2026-03-30 01:01:53.725076 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.05s 2026-03-30 01:01:53.725082 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.48s 2026-03-30 01:01:53.725089 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.24s 2026-03-30 01:01:53.725095 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.16s 2026-03-30 01:01:53.725102 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.99s 2026-03-30 01:01:53.725108 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.98s 2026-03-30 01:01:53.725115 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.95s 2026-03-30 01:01:53.725121 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.63s 2026-03-30 01:01:53.725128 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.62s 2026-03-30 01:01:53.725134 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.28s 2026-03-30 01:01:53.725141 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.77s 2026-03-30 01:01:53.725147 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.72s 2026-03-30 01:01:53.725154 | orchestrator | cinder : Copying over 
cinder-wsgi.conf ---------------------------------- 2.48s 2026-03-30 01:01:53.725160 | orchestrator | 2026-03-30 01:01:53 | INFO  | Task 7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state STARTED 2026-03-30 01:01:53.725873 | orchestrator | 2026-03-30 01:01:53 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:53.726974 | orchestrator | 2026-03-30 01:01:53 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:53.728439 | orchestrator | 2026-03-30 01:01:53 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:53.728615 | orchestrator | 2026-03-30 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:56.789701 | orchestrator | 2026-03-30 01:01:56 | INFO  | Task 7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state STARTED 2026-03-30 01:01:56.796072 | orchestrator | 2026-03-30 01:01:56 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:56.796346 | orchestrator | 2026-03-30 01:01:56 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:56.800039 | orchestrator | 2026-03-30 01:01:56 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:56.800083 | orchestrator | 2026-03-30 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:01:59.844101 | orchestrator | 2026-03-30 01:01:59 | INFO  | Task 7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state STARTED 2026-03-30 01:01:59.844949 | orchestrator | 2026-03-30 01:01:59 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state STARTED 2026-03-30 01:01:59.845790 | orchestrator | 2026-03-30 01:01:59 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:01:59.847886 | orchestrator | 2026-03-30 01:01:59 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:01:59.847921 | orchestrator | 2026-03-30 01:01:59 | INFO  | Wait 1 
second(s) until the next check 2026-03-30 01:03:34.063348 | orchestrator | 2026-03-30 01:03:34 | INFO  | Task
7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state STARTED 2026-03-30 01:03:34.064578 | orchestrator | 2026-03-30 01:03:34 | INFO  | Task 50518665-6fe1-4511-931b-96572546aba9 is in state SUCCESS 2026-03-30 01:03:34.065757 | orchestrator | 2026-03-30 01:03:34.065780 | orchestrator | 2026-03-30 01:03:34.065785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:03:34.065789 | orchestrator | 2026-03-30 01:03:34.065792 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:03:34.065796 | orchestrator | Monday 30 March 2026 01:01:29 +0000 (0:00:00.306) 0:00:00.306 ********** 2026-03-30 01:03:34.065799 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:03:34.065803 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:03:34.065806 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:03:34.065838 | orchestrator | 2026-03-30 01:03:34.065844 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:03:34.065849 | orchestrator | Monday 30 March 2026 01:01:29 +0000 (0:00:00.274) 0:00:00.581 ********** 2026-03-30 01:03:34.065854 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-30 01:03:34.065874 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-30 01:03:34.065879 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-30 01:03:34.065883 | orchestrator | 2026-03-30 01:03:34.065888 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-30 01:03:34.065892 | orchestrator | 2026-03-30 01:03:34.065897 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-30 01:03:34.065902 | orchestrator | Monday 30 March 2026 01:01:30 +0000 (0:00:00.303) 0:00:00.885 ********** 2026-03-30 01:03:34.065907 | orchestrator | included: 
/ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:03:34.065912 | orchestrator | 2026-03-30 01:03:34.065917 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-30 01:03:34.065922 | orchestrator | Monday 30 March 2026 01:01:30 +0000 (0:00:00.617) 0:00:01.503 ********** 2026-03-30 01:03:34.065927 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-30 01:03:34.065931 | orchestrator | 2026-03-30 01:03:34.065937 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-30 01:03:34.065941 | orchestrator | Monday 30 March 2026 01:01:34 +0000 (0:00:03.569) 0:00:05.072 ********** 2026-03-30 01:03:34.065946 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-30 01:03:34.065951 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-30 01:03:34.065956 | orchestrator | 2026-03-30 01:03:34.065960 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-30 01:03:34.065965 | orchestrator | Monday 30 March 2026 01:01:41 +0000 (0:00:07.100) 0:00:12.173 ********** 2026-03-30 01:03:34.065969 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-30 01:03:34.065974 | orchestrator | 2026-03-30 01:03:34.065979 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-30 01:03:34.065983 | orchestrator | Monday 30 March 2026 01:01:45 +0000 (0:00:03.660) 0:00:15.833 ********** 2026-03-30 01:03:34.065988 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-30 01:03:34.065993 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-30 01:03:34.065998 | orchestrator | 2026-03-30 01:03:34.066004 | orchestrator | TASK 
[service-ks-register : barbican | Creating roles] ************************* 2026-03-30 01:03:34.066008 | orchestrator | Monday 30 March 2026 01:01:49 +0000 (0:00:04.039) 0:00:19.873 ********** 2026-03-30 01:03:34.066031 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-30 01:03:34.066038 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-30 01:03:34.066043 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-30 01:03:34.066049 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-30 01:03:34.066054 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-30 01:03:34.066059 | orchestrator | 2026-03-30 01:03:34.066064 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-30 01:03:34.066069 | orchestrator | Monday 30 March 2026 01:02:05 +0000 (0:00:15.826) 0:00:35.700 ********** 2026-03-30 01:03:34.066074 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-30 01:03:34.066079 | orchestrator | 2026-03-30 01:03:34.066084 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-30 01:03:34.066089 | orchestrator | Monday 30 March 2026 01:02:09 +0000 (0:00:04.121) 0:00:39.821 ********** 2026-03-30 01:03:34.066104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 01:03:34.066126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 01:03:34.066257 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 01:03:34.066267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066309 | orchestrator | 2026-03-30 01:03:34.066314 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-30 01:03:34.066319 | orchestrator | Monday 30 March 2026 01:02:11 +0000 (0:00:02.370) 0:00:42.192 ********** 2026-03-30 01:03:34.066324 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-30 01:03:34.066329 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-30 01:03:34.066334 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-30 01:03:34.066339 | orchestrator | 2026-03-30 01:03:34.066344 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-30 01:03:34.066350 | orchestrator | Monday 30 March 2026 01:02:12 +0000 (0:00:01.018) 0:00:43.211 ********** 2026-03-30 01:03:34.066355 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:03:34.066360 | orchestrator | 2026-03-30 01:03:34.066365 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-30 01:03:34.066371 | orchestrator | Monday 30 March 2026 01:02:12 +0000 (0:00:00.112) 0:00:43.324 ********** 2026-03-30 01:03:34.066376 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:03:34.066381 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:03:34.066386 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:03:34.066391 | orchestrator | 2026-03-30 01:03:34.066396 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-30 01:03:34.066401 | orchestrator | Monday 30 March 2026 01:02:12 +0000 (0:00:00.249) 0:00:43.573 ********** 2026-03-30 01:03:34.066406 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-30 01:03:34.066409 | orchestrator | 2026-03-30 01:03:34.066412 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-30 01:03:34.066415 | orchestrator | Monday 30 March 2026 01:02:13 +0000 (0:00:00.966) 0:00:44.540 ********** 2026-03-30 01:03:34.066424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 01:03:34.066432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 01:03:34.066435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-30 01:03:34.066438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:03:34.066548 | orchestrator | 2026-03-30 01:03:34.066553 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-30 01:03:34.066558 | orchestrator | Monday 30 March 2026 01:02:18 +0000 (0:00:04.195) 0:00:48.736 ********** 2026-03-30 01:03:34.066563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 01:03:34.066569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066587 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:03:34.066596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 01:03:34.066601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2026-03-30 01:03:34.066612 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:03:34.066618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 01:03:34.066627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066641 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:03:34.066646 | orchestrator | 2026-03-30 01:03:34.066652 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-30 01:03:34.066656 | orchestrator | Monday 30 March 2026 01:02:18 +0000 (0:00:00.631) 0:00:49.367 ********** 2026-03-30 01:03:34.066665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-30 01:03:34.066670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:03:34.066683 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:03:34.066688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066706 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:03:34.066715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066735 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:03:34.066740 | orchestrator |
2026-03-30 01:03:34.066744 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-30 01:03:34.066749 | orchestrator | Monday 30 March 2026 01:02:20 +0000 (0:00:01.409) 0:00:50.777 **********
2026-03-30 01:03:34.066758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.066821 | orchestrator |
2026-03-30 01:03:34.066826 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-30 01:03:34.066831 | orchestrator | Monday 30 March 2026 01:02:24 +0000 (0:00:03.866) 0:00:54.644 **********
2026-03-30 01:03:34.066836 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.066842 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:03:34.066847 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:03:34.066856 | orchestrator |
2026-03-30 01:03:34.066861 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-30 01:03:34.066866 | orchestrator | Monday 30 March 2026 01:02:25 +0000 (0:00:01.223) 0:00:56.274 **********
2026-03-30 01:03:34.066871 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-30 01:03:34.066876 | orchestrator |
2026-03-30 01:03:34.066881 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-30 01:03:34.066886 | orchestrator | Monday 30 March 2026 01:02:26 +0000 (0:00:01.223) 0:00:57.497 **********
2026-03-30 01:03:34.066891 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:03:34.066896 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:03:34.066955 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:03:34.066962 | orchestrator |
2026-03-30 01:03:34.066967 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-30 01:03:34.066972 | orchestrator | Monday 30 March 2026 01:02:27 +0000 (0:00:00.584) 0:00:58.081 **********
2026-03-30 01:03:34.066978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.066998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067046 | orchestrator |
2026-03-30 01:03:34.067051 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-03-30 01:03:34.067057 | orchestrator | Monday 30 March 2026 01:02:38 +0000 (0:00:10.703) 0:01:08.785 **********
2026-03-30 01:03:34.067066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067086 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:03:34.067092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067118 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:03:34.067124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067141 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:03:34.067147 | orchestrator |
2026-03-30 01:03:34.067152 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-03-30 01:03:34.067158 | orchestrator | Monday 30 March 2026 01:02:38 +0000 (0:00:00.617) 0:01:09.403 **********
2026-03-30 01:03:34.067167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-30 01:03:34.067191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:03:34.067235 | orchestrator |
2026-03-30 01:03:34.067242 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-30 01:03:34.067247 | orchestrator | Monday 30 March 2026 01:02:41 +0000 (0:00:03.092) 0:01:12.495 **********
2026-03-30 01:03:34.067252 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:03:34.067256 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:03:34.067262 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:03:34.067266 | orchestrator |
2026-03-30 01:03:34.067271 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-30 01:03:34.067275 | orchestrator | Monday 30 March 2026 01:02:42 +0000 (0:00:00.299) 0:01:12.794 **********
2026-03-30 01:03:34.067280 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.067284 | orchestrator |
2026-03-30 01:03:34.067289 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-30 01:03:34.067293 | orchestrator | Monday 30 March 2026 01:02:44 +0000 (0:00:02.552) 0:01:15.347 **********
2026-03-30 01:03:34.067298 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.067303 | orchestrator |
2026-03-30 01:03:34.067308 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-30 01:03:34.067313 | orchestrator | Monday 30 March 2026 01:02:47 +0000 (0:00:02.647) 0:01:17.994 **********
2026-03-30 01:03:34.067318 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.067322 | orchestrator |
2026-03-30 01:03:34.067327 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-30 01:03:34.067332 | orchestrator | Monday 30 March 2026 01:03:00 +0000 (0:00:13.167) 0:01:31.162 **********
2026-03-30 01:03:34.067337 | orchestrator |
2026-03-30 01:03:34.067342 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-30 01:03:34.067347 | orchestrator | Monday 30 March 2026 01:03:01 +0000 (0:00:00.513) 0:01:31.676 **********
2026-03-30 01:03:34.067352 | orchestrator |
2026-03-30 01:03:34.067357 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-30 01:03:34.067364 | orchestrator | Monday 30 March 2026 01:03:01 +0000 (0:00:00.178) 0:01:31.854 **********
2026-03-30 01:03:34.067370 | orchestrator |
2026-03-30 01:03:34.067375 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-30 01:03:34.067380 | orchestrator | Monday 30 March 2026 01:03:01 +0000 (0:00:00.094) 0:01:31.949 **********
2026-03-30 01:03:34.067385 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.067390 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:03:34.067395 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:03:34.067400 | orchestrator |
2026-03-30 01:03:34.067409 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-30 01:03:34.067414 | orchestrator | Monday 30 March 2026 01:03:09 +0000 (0:00:08.249) 0:01:40.198 **********
2026-03-30 01:03:34.067419 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.067424 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:03:34.067429 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:03:34.067435 | orchestrator |
2026-03-30 01:03:34.067440 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-30 01:03:34.067456 | orchestrator | Monday 30 March 2026 01:03:17 +0000 (0:00:08.100) 0:01:48.299 **********
2026-03-30 01:03:34.067462 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:03:34.067467 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:03:34.067475 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:03:34.067481 | orchestrator |
2026-03-30 01:03:34.067487 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:03:34.067492 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 01:03:34.067499 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-30 01:03:34.067503 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-30 01:03:34.067508 | orchestrator |
2026-03-30 01:03:34.067514 | orchestrator |
2026-03-30 01:03:34.067518 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:03:34.067523 | orchestrator | Monday 30 March 2026 01:03:31 +0000 (0:00:13.839) 0:02:02.139 **********
2026-03-30 01:03:34.067527 | orchestrator | ===============================================================================
2026-03-30 01:03:34.067530 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.83s
2026-03-30 01:03:34.067537 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.84s
2026-03-30 01:03:34.067540 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.17s
2026-03-30 01:03:34.067543 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.70s
2026-03-30 01:03:34.067546 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.25s
2026-03-30 01:03:34.067549 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.10s
2026-03-30 01:03:34.067552 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.10s
2026-03-30 01:03:34.067555 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.20s
2026-03-30 01:03:34.067558 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.12s
2026-03-30 01:03:34.067561 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.04s
2026-03-30 01:03:34.067564
| orchestrator | barbican : Copying over config.json files for services ------------------ 3.87s 2026-03-30 01:03:34.067567 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.66s 2026-03-30 01:03:34.067573 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.57s 2026-03-30 01:03:34.067577 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.09s 2026-03-30 01:03:34.067580 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.65s 2026-03-30 01:03:34.067583 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.55s 2026-03-30 01:03:34.067586 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.37s 2026-03-30 01:03:34.067589 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.63s 2026-03-30 01:03:34.067592 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.41s 2026-03-30 01:03:34.067595 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.22s 2026-03-30 01:03:34.067602 | orchestrator | 2026-03-30 01:03:34 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:03:34.067606 | orchestrator | 2026-03-30 01:03:34 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:03:34.070415 | orchestrator | 2026-03-30 01:03:34 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:03:34.070467 | orchestrator | 2026-03-30 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:03:37.093077 | orchestrator | 2026-03-30 01:03:37 | INFO  | Task 7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state STARTED 2026-03-30 01:03:37.093636 | orchestrator | 2026-03-30 01:03:37 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state 
STARTED [repeated polling output omitted: the same four tasks (7f0e9627-…, 41419555-…, 26067941-…, 1da7706f-…) were re-checked roughly every 3 seconds from 01:03:37 to 01:04:59 and remained in state STARTED throughout] 2026-03-30 01:04:59.209779 | orchestrator | 2026-03-30 01:04:59 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:04:59.210546 | orchestrator | 2026-03-30 01:04:59 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:04:59.210568 | orchestrator | 2026-03-30 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:02.237241 | orchestrator | 2026-03-30 01:05:02 | INFO  | Task 7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state STARTED 2026-03-30 01:05:02.238106 | orchestrator | 2026-03-30 01:05:02 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:02.238940 | orchestrator | 2026-03-30 01:05:02 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:02.239770 | orchestrator | 2026-03-30 01:05:02 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:02.240014 | orchestrator | 2026-03-30 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:05.265727 | orchestrator | 2026-03-30 01:05:05 | INFO  | Task 7f0e9627-f56c-4c9e-9d26-055a4b715ac8 is in state SUCCESS 2026-03-30 01:05:05.266939 | orchestrator | 2026-03-30 01:05:05.266973 | orchestrator | 2026-03-30 01:05:05.266978 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:05:05.266983 | orchestrator | 2026-03-30 01:05:05.266987 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:05:05.266991 | orchestrator | Monday 30 March 2026 01:01:54 +0000 (0:00:00.385) 0:00:00.385 ********** 2026-03-30 01:05:05.266995 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:05:05.267000 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:05:05.267004 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:05:05.267008 | orchestrator | 2026-03-30 01:05:05.267012 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:05:05.267015 | 
orchestrator | Monday 30 March 2026 01:01:54 +0000 (0:00:00.308) 0:00:00.694 ********** 2026-03-30 01:05:05.267020 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-30 01:05:05.267024 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-30 01:05:05.267028 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-30 01:05:05.267032 | orchestrator | 2026-03-30 01:05:05.267036 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-30 01:05:05.267040 | orchestrator | 2026-03-30 01:05:05.267061 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-30 01:05:05.267066 | orchestrator | Monday 30 March 2026 01:01:55 +0000 (0:00:00.317) 0:00:01.011 ********** 2026-03-30 01:05:05.267070 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:05:05.267074 | orchestrator | 2026-03-30 01:05:05.267078 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-30 01:05:05.267082 | orchestrator | Monday 30 March 2026 01:01:55 +0000 (0:00:00.677) 0:00:01.688 ********** 2026-03-30 01:05:05.267085 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-30 01:05:05.267107 | orchestrator | 2026-03-30 01:05:05.267111 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-30 01:05:05.267115 | orchestrator | Monday 30 March 2026 01:01:59 +0000 (0:00:03.827) 0:00:05.516 ********** 2026-03-30 01:05:05.267119 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-30 01:05:05.267123 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-30 01:05:05.267127 | orchestrator | 2026-03-30 
01:05:05.267130 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-30 01:05:05.267134 | orchestrator | Monday 30 March 2026 01:02:07 +0000 (0:00:07.220) 0:00:12.737 ********** 2026-03-30 01:05:05.267138 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-30 01:05:05.267142 | orchestrator | 2026-03-30 01:05:05.267157 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-30 01:05:05.267161 | orchestrator | Monday 30 March 2026 01:02:10 +0000 (0:00:03.775) 0:00:16.513 ********** 2026-03-30 01:05:05.267165 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-30 01:05:05.267168 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-30 01:05:05.267178 | orchestrator | 2026-03-30 01:05:05.267181 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-30 01:05:05.267185 | orchestrator | Monday 30 March 2026 01:02:14 +0000 (0:00:03.984) 0:00:20.497 ********** 2026-03-30 01:05:05.267189 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-30 01:05:05.267193 | orchestrator | 2026-03-30 01:05:05.267197 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-30 01:05:05.267201 | orchestrator | Monday 30 March 2026 01:02:18 +0000 (0:00:03.299) 0:00:23.797 ********** 2026-03-30 01:05:05.267204 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-30 01:05:05.267208 | orchestrator | 2026-03-30 01:05:05.267212 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-30 01:05:05.267215 | orchestrator | Monday 30 March 2026 01:02:22 +0000 (0:00:04.470) 0:00:28.267 ********** 2026-03-30 01:05:05.267220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.267238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.267246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.267250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267329 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:05:05.267573 | orchestrator |
2026-03-30 01:05:05.267580 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-30 01:05:05.267586 | orchestrator | Monday 30 March 2026 01:02:27 +0000 (0:00:04.620) 0:00:32.887 **********
2026-03-30 01:05:05.267593 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:05.267600 | orchestrator |
2026-03-30 01:05:05.267607 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-30 01:05:05.267611 | orchestrator | Monday 30 March 2026 01:02:27 +0000 (0:00:00.157) 0:00:33.044 **********
2026-03-30 01:05:05.267615 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:05.267618 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:05.267622 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:05.267626 | orchestrator |
2026-03-30 01:05:05.267630 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-30 01:05:05.267635 | orchestrator | Monday 30 March 2026 01:02:27 +0000 (0:00:00.250) 0:00:33.295 **********
2026-03-30 01:05:05.267641 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:05:05.267648 | orchestrator |
2026-03-30 01:05:05.267655 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-30 01:05:05.267661 | orchestrator | Monday 30 March 2026 01:02:28 +0000 (0:00:00.656) 0:00:33.952 **********
2026-03-30 01:05:05.267668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.267680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.267696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.267703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.267849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}})
2026-03-30 01:05:05.267856 | orchestrator |
2026-03-30 01:05:05.267862 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-03-30 01:05:05.267869 | orchestrator | Monday 30 March 2026 01:02:37 +0000 (0:00:09.403) 0:00:43.355 **********
2026-03-30 01:05:05.267876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-30 01:05:05.267894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-30 01:05:05.267917 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.267928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.267935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.267942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.267949 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:05.267957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.267964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.268386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268420 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268428 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:05.268434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.268441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.268459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-30 01:05:05.268483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-30 01:05:05.268490 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:05.268496 | orchestrator |
2026-03-30 01:05:05.268503 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-30 01:05:05.268509 | orchestrator | Monday 30 March 2026 01:02:38 +0000 (0:00:01.302) 0:00:44.657 **********
2026-03-30 01:05:05.268516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port':
'9001'}}}})  2026-03-30 01:05:05.268523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.268536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268553 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268567 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:05.268584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.268591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.268604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268635 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:05.268642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.268648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.268660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.268693 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:05.268699 | orchestrator | 2026-03-30 01:05:05.268706 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-30 01:05:05.268712 | orchestrator | Monday 30 March 2026 01:02:41 +0000 (0:00:02.184) 0:00:46.842 ********** 2026-03-30 01:05:05.268719 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.268730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.268741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.268750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-30 01:05:05.268764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268793 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268873 | orchestrator | 2026-03-30 01:05:05.268880 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-30 01:05:05.268886 | orchestrator | Monday 30 March 2026 01:02:48 +0000 (0:00:07.286) 0:00:54.128 ********** 2026-03-30 01:05:05.268893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.268904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.268911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.268920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268929 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268953 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.268994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269043 | orchestrator | 2026-03-30 01:05:05.269052 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-30 01:05:05.269059 | orchestrator | Monday 30 March 2026 01:03:10 +0000 (0:00:22.033) 0:01:16.162 ********** 2026-03-30 01:05:05.269066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-30 01:05:05.269073 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-30 01:05:05.269080 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-30 01:05:05.269086 | orchestrator | 2026-03-30 01:05:05.269097 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-30 01:05:05.269103 | orchestrator | Monday 30 March 2026 01:03:17 +0000 (0:00:06.853) 0:01:23.015 ********** 2026-03-30 01:05:05.269110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-30 01:05:05.269116 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-30 01:05:05.269123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-30 01:05:05.269129 | orchestrator | 2026-03-30 01:05:05.269136 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-30 01:05:05.269142 | orchestrator | Monday 30 March 2026 01:03:22 +0000 (0:00:05.018) 0:01:28.034 ********** 2026-03-30 01:05:05.269149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269288 | orchestrator | 2026-03-30 01:05:05.269293 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-30 01:05:05.269298 | orchestrator | Monday 30 March 2026 01:03:25 +0000 (0:00:02.841) 0:01:30.876 ********** 2026-03-30 01:05:05.269302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 
01:05:05.269307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269514 | orchestrator | 2026-03-30 01:05:05.269518 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-30 01:05:05.269521 | orchestrator | Monday 30 March 2026 01:03:28 +0000 (0:00:03.375) 0:01:34.251 ********** 2026-03-30 01:05:05.269525 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:05.269528 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:05.269531 | orchestrator | skipping: [testbed-node-2] 
2026-03-30 01:05:05.269534 | orchestrator | 2026-03-30 01:05:05.269538 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-30 01:05:05.269541 | orchestrator | Monday 30 March 2026 01:03:28 +0000 (0:00:00.299) 0:01:34.550 ********** 2026-03-30 01:05:05.269544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.269571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269598 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:05.269607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.269614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269629 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269634 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:05.269638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-30 01:05:05.269641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-30 01:05:05.269645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:05:05.269663 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:05.269666 | orchestrator | 2026-03-30 01:05:05.269670 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-30 01:05:05.269675 | orchestrator | Monday 30 March 2026 01:03:30 +0000 (0:00:01.798) 0:01:36.349 ********** 2026-03-30 01:05:05.269685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-03-30 01:05:05.269691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.269696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-30 01:05:05.269708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:05:05.269818 | orchestrator | 2026-03-30 01:05:05.269824 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-30 01:05:05.269830 | orchestrator | Monday 30 March 2026 01:03:36 +0000 (0:00:06.178) 0:01:42.527 ********** 2026-03-30 01:05:05.269836 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:05.269842 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:05.269847 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:05.269853 | orchestrator | 2026-03-30 01:05:05.269859 | 
orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-30 01:05:05.269865 | orchestrator | Monday 30 March 2026 01:03:37 +0000 (0:00:00.808) 0:01:43.336 ********** 2026-03-30 01:05:05.269871 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-30 01:05:05.269877 | orchestrator | 2026-03-30 01:05:05.269883 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-30 01:05:05.269891 | orchestrator | Monday 30 March 2026 01:03:39 +0000 (0:00:02.297) 0:01:45.633 ********** 2026-03-30 01:05:05.269895 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-30 01:05:05.269898 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-30 01:05:05.269902 | orchestrator | 2026-03-30 01:05:05.269905 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-30 01:05:05.269909 | orchestrator | Monday 30 March 2026 01:03:42 +0000 (0:00:02.339) 0:01:47.973 ********** 2026-03-30 01:05:05.269912 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.269915 | orchestrator | 2026-03-30 01:05:05.269919 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-30 01:05:05.269922 | orchestrator | Monday 30 March 2026 01:03:57 +0000 (0:00:15.287) 0:02:03.260 ********** 2026-03-30 01:05:05.269926 | orchestrator | 2026-03-30 01:05:05.269929 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-30 01:05:05.269932 | orchestrator | Monday 30 March 2026 01:03:57 +0000 (0:00:00.072) 0:02:03.333 ********** 2026-03-30 01:05:05.269936 | orchestrator | 2026-03-30 01:05:05.269939 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-30 01:05:05.269942 | orchestrator | Monday 30 March 2026 01:03:57 +0000 (0:00:00.065) 0:02:03.399 ********** 
2026-03-30 01:05:05.269950 | orchestrator | 2026-03-30 01:05:05.269954 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-30 01:05:05.269957 | orchestrator | Monday 30 March 2026 01:03:57 +0000 (0:00:00.068) 0:02:03.467 ********** 2026-03-30 01:05:05.269961 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.269964 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:05.269967 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:05.269970 | orchestrator | 2026-03-30 01:05:05.269974 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-30 01:05:05.269978 | orchestrator | Monday 30 March 2026 01:04:10 +0000 (0:00:12.521) 0:02:15.989 ********** 2026-03-30 01:05:05.269981 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.269985 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:05.269989 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:05.269992 | orchestrator | 2026-03-30 01:05:05.269995 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-30 01:05:05.269999 | orchestrator | Monday 30 March 2026 01:04:21 +0000 (0:00:11.146) 0:02:27.135 ********** 2026-03-30 01:05:05.270002 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.270005 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:05.270008 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:05.270040 | orchestrator | 2026-03-30 01:05:05.270047 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-30 01:05:05.270053 | orchestrator | Monday 30 March 2026 01:04:27 +0000 (0:00:05.923) 0:02:33.059 ********** 2026-03-30 01:05:05.270059 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:05.270068 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.270074 | orchestrator | changed: [testbed-node-1] 2026-03-30 
01:05:05.270085 | orchestrator | 2026-03-30 01:05:05.270092 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-30 01:05:05.270098 | orchestrator | Monday 30 March 2026 01:04:38 +0000 (0:00:11.034) 0:02:44.093 ********** 2026-03-30 01:05:05.270104 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.270112 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:05.270116 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:05.270120 | orchestrator | 2026-03-30 01:05:05.270124 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-30 01:05:05.270129 | orchestrator | Monday 30 March 2026 01:04:47 +0000 (0:00:09.310) 0:02:53.404 ********** 2026-03-30 01:05:05.270132 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.270138 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:05.270143 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:05.270148 | orchestrator | 2026-03-30 01:05:05.270155 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-30 01:05:05.270161 | orchestrator | Monday 30 March 2026 01:04:55 +0000 (0:00:07.858) 0:03:01.263 ********** 2026-03-30 01:05:05.270167 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:05.270174 | orchestrator | 2026-03-30 01:05:05.270182 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:05:05.270189 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-30 01:05:05.270196 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-30 01:05:05.270202 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-30 01:05:05.270207 | orchestrator | 2026-03-30 01:05:05.270213 | orchestrator | 
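The loop items logged in the designate tasks above all share one per-service dict shape (container name, image, volumes, healthcheck). As an illustration only, here is a minimal Python reconstruction of one such item and a helper that extracts the shell command its Docker healthcheck would run; the field names are taken from the log, but the `healthcheck_command` helper is a hypothetical convenience, not part of kolla-ansible or OSISM.

```python
# Hypothetical sketch: one kolla-ansible service item as logged above.
# Field names mirror the log output; the helper below is invented.
service = {
    "key": "designate-worker",
    "value": {
        "container_name": "designate_worker",
        "group": "designate-worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/designate-worker:2024.2",
        "volumes": [
            "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
            "timeout": "30",
        },
    },
}

def healthcheck_command(svc: dict) -> str:
    """Return the shell command the container healthcheck would execute."""
    kind, cmd = svc["value"]["healthcheck"]["test"]
    if kind != "CMD-SHELL":
        raise ValueError(f"unexpected healthcheck type: {kind}")
    return cmd

print(healthcheck_command(service))
```

Reading the items this way makes the skip/changed lines easier to scan: each one is a full service definition, and only the `healthcheck` target (curl for the API, port checks for the workers) varies per service.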
2026-03-30 01:05:05.270228 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:05:05.270233 | orchestrator | Monday 30 March 2026 01:05:02 +0000 (0:00:06.816) 0:03:08.080 ********** 2026-03-30 01:05:05.270237 | orchestrator | =============================================================================== 2026-03-30 01:05:05.270248 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.03s 2026-03-30 01:05:05.270252 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.29s 2026-03-30 01:05:05.270256 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.52s 2026-03-30 01:05:05.270276 | orchestrator | designate : Restart designate-api container ---------------------------- 11.15s 2026-03-30 01:05:05.270280 | orchestrator | designate : Restart designate-producer container ----------------------- 11.03s 2026-03-30 01:05:05.270284 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 9.40s 2026-03-30 01:05:05.270288 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.31s 2026-03-30 01:05:05.270297 | orchestrator | designate : Restart designate-worker container -------------------------- 7.86s 2026-03-30 01:05:05.270301 | orchestrator | designate : Copying over config.json files for services ----------------- 7.29s 2026-03-30 01:05:05.270305 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.22s 2026-03-30 01:05:05.270309 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.85s 2026-03-30 01:05:05.270313 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.82s 2026-03-30 01:05:05.270317 | orchestrator | designate : Check designate containers ---------------------------------- 6.18s 2026-03-30 
01:05:05.270321 | orchestrator | designate : Restart designate-central container ------------------------- 5.92s 2026-03-30 01:05:05.270325 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.02s 2026-03-30 01:05:05.270329 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.62s 2026-03-30 01:05:05.270333 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.47s 2026-03-30 01:05:05.270337 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.98s 2026-03-30 01:05:05.270341 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.83s 2026-03-30 01:05:05.270345 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.78s 2026-03-30 01:05:05.270349 | orchestrator | 2026-03-30 01:05:05 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:05.270702 | orchestrator | 2026-03-30 01:05:05 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:05.271341 | orchestrator | 2026-03-30 01:05:05 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:05.272320 | orchestrator | 2026-03-30 01:05:05 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:05.272348 | orchestrator | 2026-03-30 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:08.323139 | orchestrator | 2026-03-30 01:05:08 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:08.323615 | orchestrator | 2026-03-30 01:05:08 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:08.324480 | orchestrator | 2026-03-30 01:05:08 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:08.325204 | orchestrator | 2026-03-30 01:05:08 | INFO  | Task 
1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:08.325237 | orchestrator | 2026-03-30 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:11.369754 | orchestrator | 2026-03-30 01:05:11 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:11.370467 | orchestrator | 2026-03-30 01:05:11 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:11.371074 | orchestrator | 2026-03-30 01:05:11 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:11.371690 | orchestrator | 2026-03-30 01:05:11 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:11.371723 | orchestrator | 2026-03-30 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:14.412300 | orchestrator | 2026-03-30 01:05:14 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:14.417422 | orchestrator | 2026-03-30 01:05:14 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:14.419454 | orchestrator | 2026-03-30 01:05:14 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:14.422985 | orchestrator | 2026-03-30 01:05:14 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:14.423821 | orchestrator | 2026-03-30 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:17.480555 | orchestrator | 2026-03-30 01:05:17 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:17.482059 | orchestrator | 2026-03-30 01:05:17 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:17.483359 | orchestrator | 2026-03-30 01:05:17 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:17.486243 | orchestrator | 2026-03-30 01:05:17 | INFO  | Task 
1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:17.486289 | orchestrator | 2026-03-30 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:20.529534 | orchestrator | 2026-03-30 01:05:20 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:20.530424 | orchestrator | 2026-03-30 01:05:20 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:20.533344 | orchestrator | 2026-03-30 01:05:20 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:20.535244 | orchestrator | 2026-03-30 01:05:20 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:20.535429 | orchestrator | 2026-03-30 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:23.589517 | orchestrator | 2026-03-30 01:05:23 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:23.591896 | orchestrator | 2026-03-30 01:05:23 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:23.594141 | orchestrator | 2026-03-30 01:05:23 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:23.595845 | orchestrator | 2026-03-30 01:05:23 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:23.595961 | orchestrator | 2026-03-30 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:26.640161 | orchestrator | 2026-03-30 01:05:26 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:26.644310 | orchestrator | 2026-03-30 01:05:26 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:26.646451 | orchestrator | 2026-03-30 01:05:26 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:26.648034 | orchestrator | 2026-03-30 01:05:26 | INFO  | Task 
1da7706f-0029-4fc2-b141-29889d4da1ef is in state STARTED 2026-03-30 01:05:26.648181 | orchestrator | 2026-03-30 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:29.688342 | orchestrator | 2026-03-30 01:05:29 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:29.690203 | orchestrator | 2026-03-30 01:05:29 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:29.691628 | orchestrator | 2026-03-30 01:05:29 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:29.692302 | orchestrator | 2026-03-30 01:05:29 | INFO  | Task 1da7706f-0029-4fc2-b141-29889d4da1ef is in state SUCCESS 2026-03-30 01:05:29.692448 | orchestrator | 2026-03-30 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:32.733442 | orchestrator | 2026-03-30 01:05:32 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:32.733555 | orchestrator | 2026-03-30 01:05:32 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:32.734272 | orchestrator | 2026-03-30 01:05:32 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:32.734965 | orchestrator | 2026-03-30 01:05:32 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:32.734990 | orchestrator | 2026-03-30 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:35.761479 | orchestrator | 2026-03-30 01:05:35 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:35.764329 | orchestrator | 2026-03-30 01:05:35 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:35.765074 | orchestrator | 2026-03-30 01:05:35 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:35.766489 | orchestrator | 2026-03-30 01:05:35 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:35.766527 | orchestrator | 2026-03-30 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:38.797957 | orchestrator | 2026-03-30 01:05:38 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:38.799572 | orchestrator | 2026-03-30 01:05:38 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:38.800997 | orchestrator | 2026-03-30 01:05:38 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:38.802163 | orchestrator | 2026-03-30 01:05:38 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:38.802586 | orchestrator | 2026-03-30 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:41.838965 | orchestrator | 2026-03-30 01:05:41 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:41.839014 | orchestrator | 2026-03-30 01:05:41 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:41.839764 | orchestrator | 2026-03-30 01:05:41 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:41.842796 | orchestrator | 2026-03-30 01:05:41 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:41.842936 | orchestrator | 2026-03-30 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:44.888800 | orchestrator | 2026-03-30 01:05:44 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:44.889827 | orchestrator | 2026-03-30 01:05:44 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:44.892823 | orchestrator | 2026-03-30 01:05:44 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:44.894103 | orchestrator | 2026-03-30 01:05:44 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:44.894141 | orchestrator | 2026-03-30 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:47.937585 | orchestrator | 2026-03-30 01:05:47 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:47.939977 | orchestrator | 2026-03-30 01:05:47 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:47.942650 | orchestrator | 2026-03-30 01:05:47 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:47.944824 | orchestrator | 2026-03-30 01:05:47 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:47.944867 | orchestrator | 2026-03-30 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:50.997537 | orchestrator | 2026-03-30 01:05:50 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:51.000721 | orchestrator | 2026-03-30 01:05:51 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:51.002757 | orchestrator | 2026-03-30 01:05:51 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:51.005899 | orchestrator | 2026-03-30 01:05:51 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:51.006630 | orchestrator | 2026-03-30 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:05:54.047937 | orchestrator | 2026-03-30 01:05:54 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:05:54.049564 | orchestrator | 2026-03-30 01:05:54 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:05:54.051477 | orchestrator | 2026-03-30 01:05:54 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state STARTED 2026-03-30 01:05:54.053153 | orchestrator | 2026-03-30 01:05:54 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED
2026-03-30 01:05:54.053259 | orchestrator | 2026-03-30 01:05:54 | INFO  | Wait 1 second(s) until the next check
2026-03-30 01:05:57.092400 | orchestrator | 2026-03-30 01:05:57 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED
2026-03-30 01:05:57.093220 | orchestrator | 2026-03-30 01:05:57 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED
2026-03-30 01:05:57.094294 | orchestrator | 2026-03-30 01:05:57 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED
2026-03-30 01:05:57.097956 | orchestrator | 2026-03-30 01:05:57 | INFO  | Task 41419555-dc9e-4abb-bc09-8fca8531878e is in state SUCCESS
2026-03-30 01:05:57.100303 | orchestrator |
2026-03-30 01:05:57.100341 | orchestrator |
2026-03-30 01:05:57.100346 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-30 01:05:57.100349 | orchestrator |
2026-03-30 01:05:57.100353 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-30 01:05:57.100357 | orchestrator | Monday 30 March 2026 01:03:38 +0000 (0:00:00.161) 0:00:00.161 **********
2026-03-30 01:05:57.100360 | orchestrator | changed: [localhost]
2026-03-30 01:05:57.100364 | orchestrator |
2026-03-30 01:05:57.100368 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-30 01:05:57.100371 | orchestrator | Monday 30 March 2026 01:03:39 +0000 (0:00:01.763) 0:00:01.925 **********
2026-03-30 01:05:57.100374 | orchestrator | changed: [localhost]
2026-03-30 01:05:57.100377 | orchestrator |
2026-03-30 01:05:57.100381 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-30 01:05:57.100384 | orchestrator | Monday 30 March 2026 01:04:13 +0000 (0:00:33.906) 0:00:35.832 **********
2026-03-30 01:05:57.100387 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-30 01:05:57.100391 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
2026-03-30 01:05:57.100394 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left).
2026-03-30 01:05:57.100417 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.kernel", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.kernel.sha256"}
2026-03-30 01:05:57.100422 | orchestrator |
2026-03-30 01:05:57.100425 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:05:57.100429 | orchestrator | localhost : ok=2  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-03-30 01:05:57.100433 | orchestrator |
2026-03-30 01:05:57.100436 | orchestrator |
2026-03-30 01:05:57.100439 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:05:57.100442 | orchestrator | Monday 30 March 2026 01:05:29 +0000 (0:01:15.678) 0:01:51.510 **********
2026-03-30 01:05:57.100445 | orchestrator | ===============================================================================
2026-03-30 01:05:57.100448 | orchestrator | Download ironic-agent kernel ------------------------------------------- 75.68s
2026-03-30 01:05:57.100452 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.91s
2026-03-30 01:05:57.100455 | orchestrator | Ensure the destination directory exists --------------------------------- 1.76s
2026-03-30 01:05:57.100458 | orchestrator |
2026-03-30 01:05:57.100462 | orchestrator |
2026-03-30 01:05:57.100467 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 01:05:57.100473 | orchestrator |
2026-03-30
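The fatal result above came only after the same download was attempted three more times ("3 retries left" down to "1 retries left") before the task went fatal with `attempts: 3`. As a hedged illustration of that retry-then-fail pattern (the playbook's actual YAML is not shown in this log; the function name and the `fetch` callable below are hypothetical stand-ins for the real HTTP download):

```python
# Sketch of the retry behaviour seen in the log above, NOT the actual
# kolla-ansible/OSISM code: attempt a fetch, retry a fixed number of times,
# and only then fail the task for good.
import time


def download_with_retries(fetch, retries=3, delay=0.0):
    """Call fetch() until it succeeds or `retries` attempts are exhausted.

    `fetch` is a hypothetical stand-in for the real download; in the log,
    3 attempts against the tarballs.opendev.org checksum URL all failed.
    """
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except OSError as exc:  # network failures surface as OSError subclasses
            last_error = exc
            if attempt < retries:
                time.sleep(delay)  # a real task would pause between attempts
    raise RuntimeError(f"Request failed after {retries} attempts") from last_error
```

Note that in the log the task itself reports the remaining-retry countdown, so the "FAILED - RETRYING" lines correspond to the intermediate `OSError` branches here, and the `fatal:` line to the final `RuntimeError`.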
01:05:57.100478 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:05:57.100483 | orchestrator | Monday 30 March 2026 01:01:24 +0000 (0:00:00.325) 0:00:00.326 ********** 2026-03-30 01:05:57.100488 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:05:57.100491 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:05:57.100494 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:05:57.100497 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:05:57.100500 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:05:57.100503 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:05:57.100506 | orchestrator | 2026-03-30 01:05:57.100510 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:05:57.100513 | orchestrator | Monday 30 March 2026 01:01:24 +0000 (0:00:00.587) 0:00:00.913 ********** 2026-03-30 01:05:57.100518 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-30 01:05:57.100525 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-30 01:05:57.100533 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-30 01:05:57.100538 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-30 01:05:57.100543 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-30 01:05:57.100548 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-30 01:05:57.100552 | orchestrator | 2026-03-30 01:05:57.100557 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-30 01:05:57.100562 | orchestrator | 2026-03-30 01:05:57.100566 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-30 01:05:57.100572 | orchestrator | Monday 30 March 2026 01:01:25 +0000 (0:00:00.708) 0:00:01.622 ********** 2026-03-30 01:05:57.100576 | orchestrator | included: 
/ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:05:57.100580 | orchestrator | 2026-03-30 01:05:57.100585 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-30 01:05:57.100590 | orchestrator | Monday 30 March 2026 01:01:26 +0000 (0:00:01.095) 0:00:02.718 ********** 2026-03-30 01:05:57.100594 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:05:57.100599 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:05:57.100604 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:05:57.100610 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:05:57.100619 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:05:57.100624 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:05:57.100629 | orchestrator | 2026-03-30 01:05:57.100635 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-30 01:05:57.100640 | orchestrator | Monday 30 March 2026 01:01:27 +0000 (0:00:01.420) 0:00:04.138 ********** 2026-03-30 01:05:57.100645 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:05:57.100650 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:05:57.100655 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:05:57.100660 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:05:57.100665 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:05:57.100670 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:05:57.100673 | orchestrator | 2026-03-30 01:05:57.100684 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-30 01:05:57.100687 | orchestrator | Monday 30 March 2026 01:01:29 +0000 (0:00:01.108) 0:00:05.247 ********** 2026-03-30 01:05:57.100690 | orchestrator | ok: [testbed-node-0] => { 2026-03-30 01:05:57.100694 | orchestrator |  "changed": false, 2026-03-30 01:05:57.100697 | orchestrator |  "msg": "All assertions passed" 2026-03-30 
01:05:57.100700 | orchestrator | } 2026-03-30 01:05:57.100743 | orchestrator | ok: [testbed-node-1] => { 2026-03-30 01:05:57.100746 | orchestrator |  "changed": false, 2026-03-30 01:05:57.100750 | orchestrator |  "msg": "All assertions passed" 2026-03-30 01:05:57.100753 | orchestrator | } 2026-03-30 01:05:57.100756 | orchestrator | ok: [testbed-node-2] => { 2026-03-30 01:05:57.100759 | orchestrator |  "changed": false, 2026-03-30 01:05:57.100762 | orchestrator |  "msg": "All assertions passed" 2026-03-30 01:05:57.100765 | orchestrator | } 2026-03-30 01:05:57.100768 | orchestrator | ok: [testbed-node-3] => { 2026-03-30 01:05:57.100771 | orchestrator |  "changed": false, 2026-03-30 01:05:57.100774 | orchestrator |  "msg": "All assertions passed" 2026-03-30 01:05:57.100777 | orchestrator | } 2026-03-30 01:05:57.100780 | orchestrator | ok: [testbed-node-4] => { 2026-03-30 01:05:57.100785 | orchestrator |  "changed": false, 2026-03-30 01:05:57.100791 | orchestrator |  "msg": "All assertions passed" 2026-03-30 01:05:57.100795 | orchestrator | } 2026-03-30 01:05:57.100798 | orchestrator | ok: [testbed-node-5] => { 2026-03-30 01:05:57.100801 | orchestrator |  "changed": false, 2026-03-30 01:05:57.100817 | orchestrator |  "msg": "All assertions passed" 2026-03-30 01:05:57.100821 | orchestrator | } 2026-03-30 01:05:57.100824 | orchestrator | 2026-03-30 01:05:57.100827 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-30 01:05:57.100836 | orchestrator | Monday 30 March 2026 01:01:29 +0000 (0:00:00.627) 0:00:05.875 ********** 2026-03-30 01:05:57.100840 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.100843 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.100846 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.100852 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.100855 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.100858 | orchestrator | 
skipping: [testbed-node-5] 2026-03-30 01:05:57.100861 | orchestrator | 2026-03-30 01:05:57.100864 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-30 01:05:57.100867 | orchestrator | Monday 30 March 2026 01:01:30 +0000 (0:00:00.720) 0:00:06.595 ********** 2026-03-30 01:05:57.100870 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-30 01:05:57.100874 | orchestrator | 2026-03-30 01:05:57.100877 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-30 01:05:57.100880 | orchestrator | Monday 30 March 2026 01:01:33 +0000 (0:00:03.131) 0:00:09.726 ********** 2026-03-30 01:05:57.100883 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-30 01:05:57.100887 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-30 01:05:57.100890 | orchestrator | 2026-03-30 01:05:57.100894 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-30 01:05:57.100900 | orchestrator | Monday 30 March 2026 01:01:40 +0000 (0:00:07.282) 0:00:17.009 ********** 2026-03-30 01:05:57.100904 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-30 01:05:57.100908 | orchestrator | 2026-03-30 01:05:57.100919 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-30 01:05:57.100923 | orchestrator | Monday 30 March 2026 01:01:44 +0000 (0:00:03.499) 0:00:20.508 ********** 2026-03-30 01:05:57.100926 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-30 01:05:57.100933 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-30 01:05:57.100937 | orchestrator | 2026-03-30 01:05:57.100941 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 
2026-03-30 01:05:57.100946 | orchestrator | Monday 30 March 2026 01:01:48 +0000 (0:00:04.297) 0:00:24.805 ********** 2026-03-30 01:05:57.100951 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-30 01:05:57.100985 | orchestrator | 2026-03-30 01:05:57.100992 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-30 01:05:57.100997 | orchestrator | Monday 30 March 2026 01:01:51 +0000 (0:00:03.244) 0:00:28.050 ********** 2026-03-30 01:05:57.101002 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-30 01:05:57.101008 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-30 01:05:57.101014 | orchestrator | 2026-03-30 01:05:57.101019 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-30 01:05:57.101025 | orchestrator | Monday 30 March 2026 01:01:58 +0000 (0:00:07.050) 0:00:35.101 ********** 2026-03-30 01:05:57.101030 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101036 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101041 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101046 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101051 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101056 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101062 | orchestrator | 2026-03-30 01:05:57.101067 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-30 01:05:57.101073 | orchestrator | Monday 30 March 2026 01:01:59 +0000 (0:00:00.599) 0:00:35.700 ********** 2026-03-30 01:05:57.101079 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101084 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101090 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101094 | orchestrator | skipping: [testbed-node-2] 2026-03-30 
01:05:57.101098 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101102 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101109 | orchestrator | 2026-03-30 01:05:57.101114 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-30 01:05:57.101119 | orchestrator | Monday 30 March 2026 01:02:01 +0000 (0:00:02.362) 0:00:38.063 ********** 2026-03-30 01:05:57.101124 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:05:57.101128 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:05:57.101133 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:05:57.101137 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:05:57.101142 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:05:57.101172 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:05:57.101178 | orchestrator | 2026-03-30 01:05:57.101191 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-30 01:05:57.101197 | orchestrator | Monday 30 March 2026 01:02:03 +0000 (0:00:01.131) 0:00:39.194 ********** 2026-03-30 01:05:57.101203 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101208 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101213 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101218 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101224 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101229 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101234 | orchestrator | 2026-03-30 01:05:57.101240 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-30 01:05:57.101253 | orchestrator | Monday 30 March 2026 01:02:05 +0000 (0:00:02.338) 0:00:41.532 ********** 2026-03-30 01:05:57.101264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101327 | orchestrator | 2026-03-30 01:05:57.101330 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-30 01:05:57.101333 | orchestrator | Monday 30 March 2026 01:02:07 +0000 (0:00:02.499) 0:00:44.032 ********** 2026-03-30 01:05:57.101337 | orchestrator | [WARNING]: Skipped 2026-03-30 01:05:57.101344 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-30 01:05:57.101369 | orchestrator | due to this access issue: 2026-03-30 01:05:57.101372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-30 01:05:57.101375 | orchestrator | a directory 2026-03-30 01:05:57.101381 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 01:05:57.101385 | orchestrator | 2026-03-30 01:05:57.101388 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-30 01:05:57.101392 | orchestrator | Monday 30 March 2026 
01:02:08 +0000 (0:00:00.843) 0:00:44.875 ********** 2026-03-30 01:05:57.101395 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:05:57.101402 | orchestrator | 2026-03-30 01:05:57.101408 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-30 01:05:57.101413 | orchestrator | Monday 30 March 2026 01:02:09 +0000 (0:00:01.033) 0:00:45.908 ********** 2026-03-30 01:05:57.101418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101481 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101493 | orchestrator | 2026-03-30 01:05:57.101499 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-30 01:05:57.101502 | orchestrator | Monday 30 March 2026 01:02:12 +0000 (0:00:02.897) 0:00:48.805 ********** 2026-03-30 01:05:57.101509 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101516 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101525 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101528 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101532 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101538 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101547 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101557 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101560 | orchestrator | 2026-03-30 01:05:57.101563 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-30 01:05:57.101567 | orchestrator | Monday 30 March 2026 01:02:15 +0000 (0:00:02.543) 0:00:51.349 ********** 2026-03-30 01:05:57.101572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101575 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101582 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101591 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101598 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101607 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101628 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101631 | orchestrator | 2026-03-30 01:05:57.101643 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-30 01:05:57.101647 | orchestrator | Monday 30 March 2026 01:02:18 +0000 (0:00:03.054) 0:00:54.403 ********** 2026-03-30 01:05:57.101650 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101659 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101665 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101668 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101672 | orchestrator | 
skipping: [testbed-node-4] 2026-03-30 01:05:57.101675 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101678 | orchestrator | 2026-03-30 01:05:57.101681 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-30 01:05:57.101684 | orchestrator | Monday 30 March 2026 01:02:20 +0000 (0:00:02.509) 0:00:56.912 ********** 2026-03-30 01:05:57.101687 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101691 | orchestrator | 2026-03-30 01:05:57.101694 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-30 01:05:57.101697 | orchestrator | Monday 30 March 2026 01:02:20 +0000 (0:00:00.242) 0:00:57.155 ********** 2026-03-30 01:05:57.101700 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101704 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101710 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101713 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101716 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101720 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101723 | orchestrator | 2026-03-30 01:05:57.101726 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-30 01:05:57.101729 | orchestrator | Monday 30 March 2026 01:02:21 +0000 (0:00:00.505) 0:00:57.660 ********** 2026-03-30 01:05:57.101733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101736 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101747 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101756 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101764 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101771 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101777 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101780 | orchestrator | 2026-03-30 01:05:57.101784 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-30 01:05:57.101787 | orchestrator | Monday 30 March 2026 01:02:24 +0000 (0:00:03.179) 0:01:00.839 ********** 2026-03-30 01:05:57.101793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101808 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101821 | orchestrator | 2026-03-30 01:05:57.101825 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-30 01:05:57.101828 | orchestrator | Monday 30 March 2026 01:02:27 +0000 (0:00:03.095) 0:01:03.935 ********** 2026-03-30 01:05:57.101833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.101857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.101863 | orchestrator | 2026-03-30 01:05:57.101866 | orchestrator | TASK [neutron : Copying over 
neutron_vpnaas.conf] ****************************** 2026-03-30 01:05:57.101869 | orchestrator | Monday 30 March 2026 01:02:34 +0000 (0:00:07.150) 0:01:11.085 ********** 2026-03-30 01:05:57.101872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101903 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.101908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101912 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.101922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.101927 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.101936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101945 | orchestrator | 
skipping: [testbed-node-3] 2026-03-30 01:05:57.101950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101955 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.101966 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.101972 | orchestrator | 2026-03-30 01:05:57.101977 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-30 01:05:57.101982 | orchestrator | Monday 30 March 2026 01:02:38 +0000 (0:00:03.833) 0:01:14.919 
********** 2026-03-30 01:05:57.101988 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.101994 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.101999 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:57.102004 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.102009 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:57.102038 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:57.102041 | orchestrator | 2026-03-30 01:05:57.102044 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-30 01:05:57.102048 | orchestrator | Monday 30 March 2026 01:02:42 +0000 (0:00:03.610) 0:01:18.529 ********** 2026-03-30 01:05:57.102051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102055 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.102062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102069 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.102074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102078 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.102081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.102085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.102090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-30 01:05:57.102094 | orchestrator |
2026-03-30 01:05:57.102099 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-03-30 01:05:57.102102 | orchestrator | Monday 30 March 2026 01:02:46 +0000 (0:00:03.894) 0:01:22.424 **********
2026-03-30 01:05:57.102105 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102108 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102113 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102118 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102123 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102128 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102133 | orchestrator |
2026-03-30 01:05:57.102138 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-03-30 01:05:57.102143 | orchestrator | Monday 30 March 2026 01:02:49 +0000 (0:00:02.888) 0:01:25.312 **********
2026-03-30 01:05:57.102148 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102186 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102192 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102197 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102203 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102208 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102213 | orchestrator |
2026-03-30 01:05:57.102218 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-30 01:05:57.102221 | orchestrator | Monday 30 March 2026 01:02:52 +0000 (0:00:03.151) 0:01:28.464 **********
2026-03-30 01:05:57.102227 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102230 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102233 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102236 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102239 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102242 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102245 | orchestrator |
2026-03-30 01:05:57.102248 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-30 01:05:57.102251 | orchestrator | Monday 30 March 2026 01:02:55 +0000 (0:00:03.218) 0:01:31.682 **********
2026-03-30 01:05:57.102254 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102257 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102260 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102263 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102266 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102269 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102272 | orchestrator |
2026-03-30 01:05:57.102275 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-30 01:05:57.102278 | orchestrator | Monday 30 March 2026 01:02:58 +0000 (0:00:02.963) 0:01:34.646 **********
2026-03-30 01:05:57.102281 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102284 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102287 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102291 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102294 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102297 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102300 | orchestrator |
2026-03-30 01:05:57.102303 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-30 01:05:57.102306 | orchestrator | Monday 30 March 2026 01:03:02 +0000 (0:00:04.336) 0:01:38.983 **********
2026-03-30 01:05:57.102309 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102312 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102315 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102318 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102321 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102324 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102327 | orchestrator |
2026-03-30 01:05:57.102330 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-30 01:05:57.102333 | orchestrator | Monday 30 March 2026 01:03:06 +0000 (0:00:03.611) 0:01:42.595 **********
2026-03-30 01:05:57.102340 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-30 01:05:57.102343 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102346 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-30 01:05:57.102349 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102352 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-30 01:05:57.102355 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102358 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-30 01:05:57.102361 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102364 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-30 01:05:57.102367 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102371 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-30 01:05:57.102374 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102377 | orchestrator |
2026-03-30 01:05:57.102380 | orchestrator | TASK [neutron : Copying over l3_agent.ini]
************************************* 2026-03-30 01:05:57.102383 | orchestrator | Monday 30 March 2026 01:03:09 +0000 (0:00:03.030) 0:01:45.625 ********** 2026-03-30 01:05:57.102390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102393 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.102399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102402 | orchestrator | skipping: 
[testbed-node-0] 2026-03-30 01:05:57.102405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102410 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.102414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102417 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 01:05:57.102420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102423 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.102430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102433 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.102436 | orchestrator | 2026-03-30 01:05:57.102439 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-30 01:05:57.102442 | orchestrator | Monday 30 March 2026 01:03:13 +0000 (0:00:04.446) 0:01:50.072 
********** 2026-03-30 01:05:57.102447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102451 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.102456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102459 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 01:05:57.102462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102466 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.102471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102475 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.102478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102481 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.102486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102495 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.102498 | orchestrator | 2026-03-30 01:05:57.102501 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-30 01:05:57.102504 | orchestrator | Monday 30 March 2026 01:03:16 +0000 (0:00:02.552) 0:01:52.625 ********** 2026-03-30 01:05:57.102527 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.102530 | orchestrator | skipping: [testbed-node-1] 
2026-03-30 01:05:57.102533 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102536 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102539 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102543 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102546 | orchestrator |
2026-03-30 01:05:57.102549 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-30 01:05:57.102552 | orchestrator | Monday 30 March 2026 01:03:19 +0000 (0:00:03.266) 0:01:55.891 **********
2026-03-30 01:05:57.102555 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102558 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102561 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102564 | orchestrator | changed: [testbed-node-3]
2026-03-30 01:05:57.102567 | orchestrator | changed: [testbed-node-5]
2026-03-30 01:05:57.102570 | orchestrator | changed: [testbed-node-4]
2026-03-30 01:05:57.102573 | orchestrator |
2026-03-30 01:05:57.102576 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-30 01:05:57.102579 | orchestrator | Monday 30 March 2026 01:03:24 +0000 (0:00:04.677) 0:02:00.569 **********
2026-03-30 01:05:57.102582 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102585 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102588 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102592 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102595 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102598 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102601 | orchestrator |
2026-03-30 01:05:57.102604 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-30 01:05:57.102607 | orchestrator | Monday 30 March 2026 01:03:26 +0000 (0:00:02.162) 0:02:02.732 **********
2026-03-30 01:05:57.102610 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102613 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102616 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102619 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102622 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102626 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102629 | orchestrator |
2026-03-30 01:05:57.102632 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-30 01:05:57.102635 | orchestrator | Monday 30 March 2026 01:03:28 +0000 (0:00:02.167) 0:02:04.899 **********
2026-03-30 01:05:57.102638 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102641 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102644 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102647 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102650 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102653 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102656 | orchestrator |
2026-03-30 01:05:57.102659 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-30 01:05:57.102663 | orchestrator | Monday 30 March 2026 01:03:32 +0000 (0:00:04.173) 0:02:09.072 **********
2026-03-30 01:05:57.102666 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102669 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102672 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102677 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102681 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102684 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102687 | orchestrator |
2026-03-30 01:05:57.102692 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-30 01:05:57.102696 | orchestrator | Monday 30 March 2026 01:03:35 +0000 (0:00:02.779) 0:02:11.851 **********
2026-03-30 01:05:57.102699 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102702 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102705 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102708 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102711 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102714 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102717 | orchestrator |
2026-03-30 01:05:57.102722 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-30 01:05:57.102727 | orchestrator | Monday 30 March 2026 01:03:38 +0000 (0:00:02.328) 0:02:14.179 **********
2026-03-30 01:05:57.102734 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102742 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102747 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102751 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102756 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:05:57.102762 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102767 | orchestrator |
2026-03-30 01:05:57.102772 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-30 01:05:57.102777 | orchestrator | Monday 30 March 2026 01:03:40 +0000 (0:00:02.537) 0:02:16.717 **********
2026-03-30 01:05:57.102781 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:05:57.102786 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:05:57.102794 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:05:57.102798 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:05:57.102803 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:05:57.102808 | orchestrator | skipping:
[testbed-node-4] 2026-03-30 01:05:57.102843 | orchestrator | 2026-03-30 01:05:57.102848 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-30 01:05:57.102851 | orchestrator | Monday 30 March 2026 01:03:43 +0000 (0:00:02.596) 0:02:19.314 ********** 2026-03-30 01:05:57.102854 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-30 01:05:57.102857 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.102861 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-30 01:05:57.102864 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.102867 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-30 01:05:57.102870 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.102873 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-30 01:05:57.102877 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.102880 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-30 01:05:57.102884 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.102887 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-30 01:05:57.102890 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.102893 | orchestrator | 2026-03-30 01:05:57.102896 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-30 01:05:57.102900 | orchestrator | Monday 30 March 2026 01:03:45 +0000 (0:00:02.132) 0:02:21.447 ********** 2026-03-30 01:05:57.102904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102913 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.102920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102925 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.102933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-30 01:05:57.102942 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.102948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102953 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.102958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102968 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.102973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-30 01:05:57.102978 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.102983 | orchestrator | 2026-03-30 01:05:57.102988 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-30 01:05:57.102993 | orchestrator | Monday 30 March 2026 01:03:47 +0000 (0:00:02.354) 0:02:23.801 ********** 2026-03-30 01:05:57.103003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.103011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.103017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.103023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-30 01:05:57.103031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.103038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-30 01:05:57.103041 | orchestrator | 2026-03-30 01:05:57.103045 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-30 01:05:57.103048 | orchestrator | Monday 30 March 2026 01:03:50 +0000 (0:00:02.683) 0:02:26.485 ********** 2026-03-30 01:05:57.103051 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:05:57.103054 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:05:57.103058 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:05:57.103061 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:05:57.103064 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:05:57.103067 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:05:57.103070 | orchestrator | 2026-03-30 01:05:57.103073 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-30 01:05:57.103076 | orchestrator | Monday 30 March 2026 01:03:51 +0000 (0:00:00.704) 0:02:27.189 ********** 2026-03-30 01:05:57.103079 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:57.103082 | orchestrator | 2026-03-30 01:05:57.103087 | orchestrator | TASK 
[neutron : Creating Neutron database user and setting permissions] ******** 2026-03-30 01:05:57.103092 | orchestrator | Monday 30 March 2026 01:03:53 +0000 (0:00:02.300) 0:02:29.490 ********** 2026-03-30 01:05:57.103099 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:57.103105 | orchestrator | 2026-03-30 01:05:57.103110 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-30 01:05:57.103115 | orchestrator | Monday 30 March 2026 01:03:55 +0000 (0:00:02.202) 0:02:31.692 ********** 2026-03-30 01:05:57.103120 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:57.103125 | orchestrator | 2026-03-30 01:05:57.103131 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-30 01:05:57.103140 | orchestrator | Monday 30 March 2026 01:04:36 +0000 (0:00:41.431) 0:03:13.125 ********** 2026-03-30 01:05:57.103145 | orchestrator | 2026-03-30 01:05:57.103161 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-30 01:05:57.103167 | orchestrator | Monday 30 March 2026 01:04:37 +0000 (0:00:00.101) 0:03:13.227 ********** 2026-03-30 01:05:57.103171 | orchestrator | 2026-03-30 01:05:57.103175 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-30 01:05:57.103180 | orchestrator | Monday 30 March 2026 01:04:37 +0000 (0:00:00.069) 0:03:13.296 ********** 2026-03-30 01:05:57.103184 | orchestrator | 2026-03-30 01:05:57.103188 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-30 01:05:57.103193 | orchestrator | Monday 30 March 2026 01:04:37 +0000 (0:00:00.064) 0:03:13.360 ********** 2026-03-30 01:05:57.103198 | orchestrator | 2026-03-30 01:05:57.103203 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-30 01:05:57.103208 | orchestrator | Monday 30 March 2026 
01:04:37 +0000 (0:00:00.095) 0:03:13.456 ********** 2026-03-30 01:05:57.103213 | orchestrator | 2026-03-30 01:05:57.103219 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-30 01:05:57.103224 | orchestrator | Monday 30 March 2026 01:04:37 +0000 (0:00:00.065) 0:03:13.521 ********** 2026-03-30 01:05:57.103229 | orchestrator | 2026-03-30 01:05:57.103234 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-30 01:05:57.103240 | orchestrator | Monday 30 March 2026 01:04:37 +0000 (0:00:00.064) 0:03:13.585 ********** 2026-03-30 01:05:57.103244 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:05:57.103247 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:05:57.103251 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:05:57.103254 | orchestrator | 2026-03-30 01:05:57.103257 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-30 01:05:57.103260 | orchestrator | Monday 30 March 2026 01:05:00 +0000 (0:00:22.947) 0:03:36.533 ********** 2026-03-30 01:05:57.103263 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:05:57.103266 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:05:57.103269 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:05:57.103273 | orchestrator | 2026-03-30 01:05:57.103276 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:05:57.103279 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 01:05:57.103283 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-30 01:05:57.103286 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-30 01:05:57.103289 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 
failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 01:05:57.103292 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 01:05:57.103296 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-30 01:05:57.103299 | orchestrator | 2026-03-30 01:05:57.103302 | orchestrator | 2026-03-30 01:05:57.103305 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:05:57.103308 | orchestrator | Monday 30 March 2026 01:05:53 +0000 (0:00:53.561) 0:04:30.094 ********** 2026-03-30 01:05:57.103311 | orchestrator | =============================================================================== 2026-03-30 01:05:57.103317 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.56s 2026-03-30 01:05:57.103324 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.43s 2026-03-30 01:05:57.103327 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.95s 2026-03-30 01:05:57.103330 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.28s 2026-03-30 01:05:57.103334 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.15s 2026-03-30 01:05:57.103337 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.05s 2026-03-30 01:05:57.103340 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.68s 2026-03-30 01:05:57.103343 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.45s 2026-03-30 01:05:57.103346 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.34s 2026-03-30 01:05:57.103349 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 
4.30s 2026-03-30 01:05:57.103352 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.17s 2026-03-30 01:05:57.103356 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.89s 2026-03-30 01:05:57.103361 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.83s 2026-03-30 01:05:57.103364 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.61s 2026-03-30 01:05:57.103368 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.61s 2026-03-30 01:05:57.103371 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.50s 2026-03-30 01:05:57.103374 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.27s 2026-03-30 01:05:57.103377 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.24s 2026-03-30 01:05:57.103380 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.22s 2026-03-30 01:05:57.103383 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.18s 2026-03-30 01:05:57.103440 | orchestrator | 2026-03-30 01:05:57 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:05:57.103447 | orchestrator | 2026-03-30 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:00.142896 | orchestrator | 2026-03-30 01:06:00 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:00.147203 | orchestrator | 2026-03-30 01:06:00 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:00.152505 | orchestrator | 2026-03-30 01:06:00 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:06:00.154489 | orchestrator | 2026-03-30 01:06:00 | INFO  | Task 
26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:00.154537 | orchestrator | 2026-03-30 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:03.196567 | orchestrator | 2026-03-30 01:06:03 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:03.197025 | orchestrator | 2026-03-30 01:06:03 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:03.200234 | orchestrator | 2026-03-30 01:06:03 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:06:03.200685 | orchestrator | 2026-03-30 01:06:03 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:03.200705 | orchestrator | 2026-03-30 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:06.242790 | orchestrator | 2026-03-30 01:06:06 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:06.244402 | orchestrator | 2026-03-30 01:06:06 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:06.245673 | orchestrator | 2026-03-30 01:06:06 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state STARTED 2026-03-30 01:06:06.247338 | orchestrator | 2026-03-30 01:06:06 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:06.247380 | orchestrator | 2026-03-30 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:09.275918 | orchestrator | 2026-03-30 01:06:09 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:09.275967 | orchestrator | 2026-03-30 01:06:09 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:09.276747 | orchestrator | 2026-03-30 01:06:09 | INFO  | Task 73a776ad-b21b-408e-9dd1-3ce3066bfa95 is in state STARTED 2026-03-30 01:06:09.279584 | orchestrator | 2026-03-30 01:06:09.279622 | orchestrator | 2026-03-30 
01:06:09.279627 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:06:09.279631 | orchestrator | 2026-03-30 01:06:09.279634 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:06:09.279637 | orchestrator | Monday 30 March 2026 01:05:07 +0000 (0:00:00.314) 0:00:00.314 ********** 2026-03-30 01:06:09.279641 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:06:09.279645 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:06:09.279649 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:06:09.279652 | orchestrator | 2026-03-30 01:06:09.279655 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:06:09.279658 | orchestrator | Monday 30 March 2026 01:05:07 +0000 (0:00:00.302) 0:00:00.616 ********** 2026-03-30 01:06:09.279668 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-30 01:06:09.279672 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-30 01:06:09.279679 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-30 01:06:09.279683 | orchestrator | 2026-03-30 01:06:09.279686 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-30 01:06:09.279689 | orchestrator | 2026-03-30 01:06:09.279692 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-30 01:06:09.279695 | orchestrator | Monday 30 March 2026 01:05:08 +0000 (0:00:00.362) 0:00:00.979 ********** 2026-03-30 01:06:09.279698 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:06:09.279702 | orchestrator | 2026-03-30 01:06:09.279705 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-30 01:06:09.279708 | orchestrator | Monday 30 
March 2026 01:05:08 +0000 (0:00:00.815) 0:00:01.794 ********** 2026-03-30 01:06:09.279719 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-30 01:06:09.279723 | orchestrator | 2026-03-30 01:06:09.279726 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-30 01:06:09.279729 | orchestrator | Monday 30 March 2026 01:05:12 +0000 (0:00:03.431) 0:00:05.226 ********** 2026-03-30 01:06:09.279732 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-30 01:06:09.279736 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-30 01:06:09.279739 | orchestrator | 2026-03-30 01:06:09.279742 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-30 01:06:09.279745 | orchestrator | Monday 30 March 2026 01:05:18 +0000 (0:00:06.228) 0:00:11.454 ********** 2026-03-30 01:06:09.279748 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-30 01:06:09.279752 | orchestrator | 2026-03-30 01:06:09.279755 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-30 01:06:09.279758 | orchestrator | Monday 30 March 2026 01:05:21 +0000 (0:00:03.278) 0:00:14.732 ********** 2026-03-30 01:06:09.279761 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-30 01:06:09.279764 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-30 01:06:09.279777 | orchestrator | 2026-03-30 01:06:09.279780 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-30 01:06:09.279783 | orchestrator | Monday 30 March 2026 01:05:25 +0000 (0:00:03.831) 0:00:18.564 ********** 2026-03-30 01:06:09.279786 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-30 01:06:09.279790 | orchestrator 
| 2026-03-30 01:06:09.279795 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-30 01:06:09.279801 | orchestrator | Monday 30 March 2026 01:05:28 +0000 (0:00:02.907) 0:00:21.472 ********** 2026-03-30 01:06:09.279805 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-30 01:06:09.279810 | orchestrator | 2026-03-30 01:06:09.279815 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-30 01:06:09.279820 | orchestrator | Monday 30 March 2026 01:05:32 +0000 (0:00:03.586) 0:00:25.058 ********** 2026-03-30 01:06:09.279825 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:06:09.279830 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:06:09.279835 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:06:09.279839 | orchestrator | 2026-03-30 01:06:09.279844 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-30 01:06:09.279848 | orchestrator | Monday 30 March 2026 01:05:32 +0000 (0:00:00.256) 0:00:25.315 ********** 2026-03-30 01:06:09.279856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.279873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.279882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.279893 | orchestrator | 
2026-03-30 01:06:09.279898 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-30 01:06:09.279903 | orchestrator | Monday 30 March 2026 01:05:34 +0000 (0:00:01.540) 0:00:26.855 ********** 2026-03-30 01:06:09.279909 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:06:09.279914 | orchestrator | 2026-03-30 01:06:09.279919 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-30 01:06:09.279924 | orchestrator | Monday 30 March 2026 01:05:34 +0000 (0:00:00.108) 0:00:26.964 ********** 2026-03-30 01:06:09.279928 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:06:09.279933 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:06:09.279939 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:06:09.279943 | orchestrator | 2026-03-30 01:06:09.279948 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-30 01:06:09.279953 | orchestrator | Monday 30 March 2026 01:05:34 +0000 (0:00:00.294) 0:00:27.259 ********** 2026-03-30 01:06:09.279958 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:06:09.279963 | orchestrator | 2026-03-30 01:06:09.279968 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-30 01:06:09.279973 | orchestrator | Monday 30 March 2026 01:05:35 +0000 (0:00:00.617) 0:00:27.877 ********** 2026-03-30 01:06:09.279978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.279988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.279992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.279998 | orchestrator | 2026-03-30 01:06:09.280004 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-30 01:06:09.280007 | orchestrator | Monday 30 March 2026 01:05:36 +0000 (0:00:01.573) 0:00:29.450 ********** 2026-03-30 01:06:09.280010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280013 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:06:09.280017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280020 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:06:09.280025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280029 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:06:09.280032 | orchestrator | 2026-03-30 01:06:09.280035 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-30 01:06:09.280038 | orchestrator | Monday 30 March 2026 01:05:37 +0000 (0:00:00.614) 0:00:30.065 ********** 2026-03-30 01:06:09.280042 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280047 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:06:09.280052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280056 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:06:09.280059 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280062 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:06:09.280065 | orchestrator | 2026-03-30 01:06:09.280068 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-30 01:06:09.280071 | orchestrator | Monday 30 March 2026 01:05:37 +0000 (0:00:00.739) 0:00:30.804 ********** 2026-03-30 01:06:09.280077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280091 | orchestrator | 2026-03-30 01:06:09.280095 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-30 01:06:09.280098 | orchestrator | Monday 30 March 2026 01:05:39 +0000 (0:00:01.539) 0:00:32.343 ********** 2026-03-30 01:06:09.280101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280117 | orchestrator | 2026-03-30 01:06:09.280120 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-30 01:06:09.280123 | orchestrator | Monday 30 March 2026 01:05:41 +0000 (0:00:02.179) 0:00:34.523 ********** 2026-03-30 01:06:09.280138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-30 01:06:09.280144 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-30 01:06:09.280148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-30 01:06:09.280151 | orchestrator | 2026-03-30 01:06:09.280154 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 
configuration] ***************** 2026-03-30 01:06:09.280159 | orchestrator | Monday 30 March 2026 01:05:43 +0000 (0:00:01.404) 0:00:35.927 ********** 2026-03-30 01:06:09.280165 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:06:09.280172 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:06:09.280180 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:06:09.280185 | orchestrator | 2026-03-30 01:06:09.280190 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-30 01:06:09.280195 | orchestrator | Monday 30 March 2026 01:05:44 +0000 (0:00:01.290) 0:00:37.217 ********** 2026-03-30 01:06:09.280200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280205 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:06:09.280210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280215 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:06:09.280224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-30 01:06:09.280234 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:06:09.280239 | orchestrator | 2026-03-30 01:06:09.280244 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-30 01:06:09.280249 | orchestrator | Monday 30 March 2026 01:05:45 +0000 (0:00:00.735) 0:00:37.953 ********** 2026-03-30 01:06:09.280257 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-30 01:06:09.280272 | orchestrator | 2026-03-30 01:06:09.280277 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-30 01:06:09.280282 | orchestrator | Monday 30 March 2026 01:05:46 +0000 (0:00:01.006) 0:00:38.959 ********** 2026-03-30 01:06:09.280288 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:06:09.280300 | orchestrator | 2026-03-30 01:06:09.280305 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-30 01:06:09.280310 | orchestrator | Monday 30 March 2026 01:05:48 +0000 (0:00:01.951) 0:00:40.911 ********** 2026-03-30 01:06:09.280315 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:06:09.280320 | orchestrator | 2026-03-30 01:06:09.280325 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-30 01:06:09.280331 | orchestrator | Monday 30 March 2026 01:05:50 +0000 (0:00:02.113) 0:00:43.024 ********** 2026-03-30 01:06:09.280335 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:06:09.280350 | orchestrator | 2026-03-30 01:06:09.280360 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-03-30 01:06:09.280365 | orchestrator | Monday 30 March 2026 01:06:02 +0000 (0:00:12.139) 0:00:55.164 ********** 2026-03-30 01:06:09.280370 | orchestrator | 2026-03-30 01:06:09.280376 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-30 01:06:09.280381 | orchestrator | Monday 30 March 2026 01:06:02 +0000 (0:00:00.078) 0:00:55.243 ********** 2026-03-30 01:06:09.280387 | orchestrator | 2026-03-30 01:06:09.280396 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-30 01:06:09.280401 | orchestrator | Monday 30 March 2026 01:06:02 +0000 (0:00:00.065) 0:00:55.309 ********** 2026-03-30 01:06:09.280407 | orchestrator | 2026-03-30 01:06:09.280413 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-30 01:06:09.280418 | orchestrator | Monday 30 March 2026 01:06:02 +0000 (0:00:00.065) 0:00:55.375 ********** 2026-03-30 01:06:09.280423 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:06:09.280427 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:06:09.280430 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:06:09.280434 | orchestrator | 2026-03-30 01:06:09.280438 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:06:09.280442 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-30 01:06:09.280447 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 01:06:09.280450 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 01:06:09.280454 | orchestrator | 2026-03-30 01:06:09.280458 | orchestrator | 2026-03-30 01:06:09.280462 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-30 01:06:09.280465 | orchestrator | Monday 30 March 2026 01:06:07 +0000 (0:00:04.991) 0:01:00.366 ********** 2026-03-30 01:06:09.280469 | orchestrator | =============================================================================== 2026-03-30 01:06:09.280476 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.14s 2026-03-30 01:06:09.280480 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.23s 2026-03-30 01:06:09.280483 | orchestrator | placement : Restart placement-api container ----------------------------- 4.99s 2026-03-30 01:06:09.280487 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.83s 2026-03-30 01:06:09.280490 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.59s 2026-03-30 01:06:09.280494 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.43s 2026-03-30 01:06:09.280498 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.28s 2026-03-30 01:06:09.280502 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.91s 2026-03-30 01:06:09.280506 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.18s 2026-03-30 01:06:09.280509 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.11s 2026-03-30 01:06:09.280513 | orchestrator | placement : Creating placement databases -------------------------------- 1.95s 2026-03-30 01:06:09.280521 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2026-03-30 01:06:09.280524 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.54s 2026-03-30 01:06:09.280528 | orchestrator | placement : Copying over 
config.json files for services ----------------- 1.54s 2026-03-30 01:06:09.280532 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.40s 2026-03-30 01:06:09.280535 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.29s 2026-03-30 01:06:09.280540 | orchestrator | placement : Check placement containers ---------------------------------- 1.01s 2026-03-30 01:06:09.280545 | orchestrator | placement : include_tasks ----------------------------------------------- 0.82s 2026-03-30 01:06:09.280550 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.74s 2026-03-30 01:06:09.280558 | orchestrator | placement : Copying over existing policy file --------------------------- 0.74s 2026-03-30 01:06:09.280564 | orchestrator | 2026-03-30 01:06:09 | INFO  | Task 5f7cfbbd-5bb6-4ab1-ad9f-67022c272b61 is in state SUCCESS 2026-03-30 01:06:09.280570 | orchestrator | 2026-03-30 01:06:09 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:09.280575 | orchestrator | 2026-03-30 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:12.316694 | orchestrator | 2026-03-30 01:06:12 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:12.318250 | orchestrator | 2026-03-30 01:06:12 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:12.319535 | orchestrator | 2026-03-30 01:06:12 | INFO  | Task 73a776ad-b21b-408e-9dd1-3ce3066bfa95 is in state STARTED 2026-03-30 01:06:12.320952 | orchestrator | 2026-03-30 01:06:12 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:12.320993 | orchestrator | 2026-03-30 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:15.352910 | orchestrator | 2026-03-30 01:06:15 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 
01:06:15.352962 | orchestrator | 2026-03-30 01:06:15 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:06:15.352968 | orchestrator | 2026-03-30 01:06:15 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:15.352974 | orchestrator | 2026-03-30 01:06:15 | INFO  | Task 73a776ad-b21b-408e-9dd1-3ce3066bfa95 is in state SUCCESS 2026-03-30 01:06:15.352980 | orchestrator | 2026-03-30 01:06:15 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:15.352986 | orchestrator | 2026-03-30 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:18.387272 | orchestrator | 2026-03-30 01:06:18 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:18.389640 | orchestrator | 2026-03-30 01:06:18 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:06:18.391309 | orchestrator | 2026-03-30 01:06:18 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:18.392961 | orchestrator | 2026-03-30 01:06:18 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:18.393313 | orchestrator | 2026-03-30 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:21.424710 | orchestrator | 2026-03-30 01:06:21 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED 2026-03-30 01:06:21.426055 | orchestrator | 2026-03-30 01:06:21 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:06:21.428369 | orchestrator | 2026-03-30 01:06:21 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:06:21.430045 | orchestrator | 2026-03-30 01:06:21 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:06:21.430250 | orchestrator | 2026-03-30 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:06:24.465995 | orchestrator 
| 2026-03-30 01:06:24 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED
2026-03-30 01:06:24.466421 | orchestrator | 2026-03-30 01:06:24 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED
2026-03-30 01:06:24.467076 | orchestrator | 2026-03-30 01:06:24 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED
2026-03-30 01:06:24.467829 | orchestrator | 2026-03-30 01:06:24 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED
2026-03-30 01:06:24.467856 | orchestrator | 2026-03-30 01:06:24 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED state checks for the same four tasks repeated every ~3 seconds from 01:06:27 through 01:07:04 ...]
2026-03-30 01:07:07.083525 | orchestrator | 2026-03-30 01:07:07 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state STARTED
2026-03-30 01:07:07.085326 | orchestrator | 2026-03-30 01:07:07 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED
2026-03-30 01:07:07.085431 | orchestrator | 2026-03-30 01:07:07 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED
2026-03-30 01:07:07.086841 | orchestrator | 2026-03-30 01:07:07 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED
2026-03-30 01:07:07.086888 | orchestrator | 2026-03-30 01:07:07 | INFO  | Wait 1 second(s) until the next check
2026-03-30 01:07:10.120694 | orchestrator |
2026-03-30 01:07:10.120769 | orchestrator |
2026-03-30
01:07:10.120778 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 01:07:10.120785 | orchestrator |
2026-03-30 01:07:10.120790 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 01:07:10.120800 | orchestrator | Monday 30 March 2026 01:06:10 +0000 (0:00:00.181) 0:00:00.181 **********
2026-03-30 01:07:10.120809 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:10.120816 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:07:10.120821 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:07:10.120826 | orchestrator |
2026-03-30 01:07:10.120831 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 01:07:10.120836 | orchestrator | Monday 30 March 2026 01:06:11 +0000 (0:00:00.298) 0:00:00.480 **********
2026-03-30 01:07:10.120841 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-30 01:07:10.120847 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-30 01:07:10.120852 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-30 01:07:10.120872 | orchestrator |
2026-03-30 01:07:10.120878 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-30 01:07:10.120883 | orchestrator |
2026-03-30 01:07:10.120888 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-30 01:07:10.120893 | orchestrator | Monday 30 March 2026 01:06:11 +0000 (0:00:00.444) 0:00:00.925 **********
2026-03-30 01:07:10.120898 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:10.120903 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:07:10.120921 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:07:10.120927 | orchestrator |
2026-03-30 01:07:10.120932 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:07:10.120938 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 01:07:10.120944 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 01:07:10.120949 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-30 01:07:10.120954 | orchestrator |
2026-03-30 01:07:10.120959 | orchestrator |
2026-03-30 01:07:10.120964 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:07:10.120969 | orchestrator | Monday 30 March 2026 01:06:12 +0000 (0:00:01.173) 0:00:02.098 **********
2026-03-30 01:07:10.120974 | orchestrator | ===============================================================================
2026-03-30 01:07:10.120979 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.17s
2026-03-30 01:07:10.120984 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-03-30 01:07:10.120989 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-03-30 01:07:10.120994 | orchestrator |
2026-03-30 01:07:10.120999 | orchestrator |
2026-03-30 01:07:10.121004 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 01:07:10.121009 | orchestrator |
2026-03-30 01:07:10.121014 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 01:07:10.121019 | orchestrator | Monday 30 March 2026 01:05:32 +0000 (0:00:00.269) 0:00:00.269 **********
2026-03-30 01:07:10.121024 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:10.121029 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:07:10.121034 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:07:10.121039 | orchestrator |
2026-03-30 01:07:10.121044 | orchestrator | TASK [Group
hosts based on enabled services] ***********************************
2026-03-30 01:07:10.121049 | orchestrator | Monday 30 March 2026 01:05:32 +0000 (0:00:00.245) 0:00:00.515 **********
2026-03-30 01:07:10.121053 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-30 01:07:10.121058 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-30 01:07:10.121063 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-30 01:07:10.121068 | orchestrator |
2026-03-30 01:07:10.121073 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-30 01:07:10.121079 | orchestrator |
2026-03-30 01:07:10.121083 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-30 01:07:10.121089 | orchestrator | Monday 30 March 2026 01:05:33 +0000 (0:00:00.281) 0:00:00.797 **********
2026-03-30 01:07:10.121094 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:07:10.121099 | orchestrator |
2026-03-30 01:07:10.121103 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-30 01:07:10.121109 | orchestrator | Monday 30 March 2026 01:05:33 +0000 (0:00:00.557) 0:00:01.354 **********
2026-03-30 01:07:10.121114 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-30 01:07:10.121119 | orchestrator |
2026-03-30 01:07:10.121124 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-30 01:07:10.121142 | orchestrator | Monday 30 March 2026 01:05:37 +0000 (0:00:03.810) 0:00:05.165 **********
2026-03-30 01:07:10.121148 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-30 01:07:10.121153 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-30 01:07:10.121157 | orchestrator |
2026-03-30 01:07:10.121163 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-30 01:07:10.121167 | orchestrator | Monday 30 March 2026 01:05:44 +0000 (0:00:06.901) 0:00:12.067 **********
2026-03-30 01:07:10.121173 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-30 01:07:10.121178 | orchestrator |
2026-03-30 01:07:10.121182 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-30 01:07:10.121242 | orchestrator | Monday 30 March 2026 01:05:47 +0000 (0:00:02.922) 0:00:14.990 **********
2026-03-30 01:07:10.121260 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-30 01:07:10.121266 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-30 01:07:10.121271 | orchestrator |
2026-03-30 01:07:10.121276 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-30 01:07:10.121280 | orchestrator | Monday 30 March 2026 01:05:50 +0000 (0:00:03.469) 0:00:18.460 **********
2026-03-30 01:07:10.121285 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-30 01:07:10.121290 | orchestrator |
2026-03-30 01:07:10.121295 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-30 01:07:10.121300 | orchestrator | Monday 30 March 2026 01:05:53 +0000 (0:00:02.944) 0:00:21.404 **********
2026-03-30 01:07:10.121305 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-30 01:07:10.121310 | orchestrator |
2026-03-30 01:07:10.121315 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-30 01:07:10.121320 | orchestrator | Monday 30 March 2026 01:05:56 +0000 (0:00:03.001) 0:00:24.575 **********
2026-03-30 01:07:10.121325 | orchestrator |
changed: [testbed-node-0] 2026-03-30 01:07:10.121330 | orchestrator | 2026-03-30 01:07:10.121335 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-30 01:07:10.121340 | orchestrator | Monday 30 March 2026 01:05:59 +0000 (0:00:03.001) 0:00:27.576 ********** 2026-03-30 01:07:10.121344 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.121349 | orchestrator | 2026-03-30 01:07:10.121354 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-30 01:07:10.121360 | orchestrator | Monday 30 March 2026 01:06:03 +0000 (0:00:03.447) 0:00:31.024 ********** 2026-03-30 01:07:10.121365 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.121370 | orchestrator | 2026-03-30 01:07:10.121375 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-30 01:07:10.121380 | orchestrator | Monday 30 March 2026 01:06:06 +0000 (0:00:03.254) 0:00:34.279 ********** 2026-03-30 01:07:10.121386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121394 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121434 | orchestrator | 2026-03-30 01:07:10.121439 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-30 01:07:10.121448 | orchestrator | Monday 30 March 2026 01:06:08 +0000 (0:00:01.786) 0:00:36.066 ********** 2026-03-30 01:07:10.121452 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:10.121458 | orchestrator | 2026-03-30 01:07:10.121463 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-30 01:07:10.121468 | orchestrator | Monday 30 March 2026 01:06:08 +0000 (0:00:00.098) 0:00:36.164 ********** 2026-03-30 01:07:10.121472 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:10.121477 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:10.121482 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:10.121487 | orchestrator | 2026-03-30 01:07:10.121492 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-30 01:07:10.121496 | orchestrator | Monday 30 March 2026 01:06:08 +0000 (0:00:00.262) 0:00:36.427 ********** 2026-03-30 01:07:10.121502 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 01:07:10.121506 | orchestrator | 2026-03-30 01:07:10.121511 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-30 01:07:10.121516 | orchestrator | Monday 30 March 2026 01:06:09 +0000 (0:00:00.759) 0:00:37.186 ********** 2026-03-30 01:07:10.121525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121570 | orchestrator | 2026-03-30 01:07:10.121573 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-30 01:07:10.121576 | orchestrator | Monday 30 March 2026 01:06:11 +0000 (0:00:02.372) 0:00:39.558 ********** 2026-03-30 01:07:10.121579 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:07:10.121583 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:07:10.121586 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:07:10.121589 | orchestrator | 2026-03-30 01:07:10.121592 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-30 01:07:10.121597 | orchestrator | Monday 30 March 2026 01:06:12 +0000 (0:00:00.382) 0:00:39.941 ********** 2026-03-30 01:07:10.121601 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:07:10.121604 | orchestrator | 2026-03-30 01:07:10.121615 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-30 01:07:10.121618 | orchestrator | Monday 30 March 2026 01:06:12 +0000 (0:00:00.449) 0:00:40.391 ********** 
2026-03-30 01:07:10.121621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121655 | orchestrator | 2026-03-30 01:07:10.121658 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-30 01:07:10.121662 | orchestrator | Monday 30 March 2026 01:06:15 +0000 (0:00:02.471) 0:00:42.863 ********** 2026-03-30 01:07:10.121665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121672 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:10.121677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121687 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:10.121690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121696 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121700 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:10.121703 | orchestrator | 2026-03-30 01:07:10.121706 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-30 01:07:10.121709 | orchestrator | Monday 30 March 2026 01:06:15 +0000 (0:00:00.836) 0:00:43.700 ********** 2026-03-30 01:07:10.121712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2026-03-30 01:07:10.121720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121723 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:10.121729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121738 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:10.121741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121748 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:10.121751 | orchestrator | 2026-03-30 01:07:10.121755 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-30 01:07:10.121760 | orchestrator | Monday 30 March 2026 01:06:16 +0000 (0:00:00.795) 0:00:44.496 ********** 2026-03-30 01:07:10.121766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121794 | orchestrator | 2026-03-30 01:07:10.121798 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2026-03-30 01:07:10.121801 | orchestrator | Monday 30 March 2026 01:06:19 +0000 (0:00:02.395) 0:00:46.891 ********** 2026-03-30 01:07:10.121804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121831 | orchestrator | 2026-03-30 01:07:10.121834 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-30 01:07:10.121838 | orchestrator | Monday 30 March 2026 01:06:23 +0000 (0:00:04.661) 0:00:51.552 ********** 2026-03-30 01:07:10.121841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121847 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:10.121852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121864 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:10.121867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-30 01:07:10.121871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:10.121874 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:10.121877 | orchestrator | 2026-03-30 01:07:10.121881 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-30 01:07:10.121884 | orchestrator | Monday 30 March 2026 01:06:24 +0000 (0:00:00.583) 0:00:52.136 ********** 2026-03-30 01:07:10.121887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-30 01:07:10.121904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:10.121916 | orchestrator | 2026-03-30 01:07:10.121920 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-30 01:07:10.121923 | orchestrator | Monday 30 March 2026 01:06:26 +0000 (0:00:01.777) 0:00:53.913 ********** 2026-03-30 01:07:10.121928 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:10.121932 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:10.121935 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:10.121938 | orchestrator | 2026-03-30 01:07:10.121941 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-30 01:07:10.121944 | orchestrator | Monday 30 March 2026 01:06:26 +0000 (0:00:00.402) 0:00:54.316 ********** 2026-03-30 01:07:10.121947 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.121950 | orchestrator | 2026-03-30 01:07:10.121953 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-30 01:07:10.121957 | orchestrator | Monday 30 March 2026 01:06:28 +0000 (0:00:01.894) 0:00:56.210 ********** 2026-03-30 01:07:10.121960 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.121963 | orchestrator | 2026-03-30 01:07:10.121966 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-30 01:07:10.121969 | orchestrator | Monday 30 March 2026 01:06:30 +0000 (0:00:01.933) 0:00:58.144 ********** 2026-03-30 01:07:10.121974 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.121977 | orchestrator | 2026-03-30 
01:07:10.121981 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-30 01:07:10.121984 | orchestrator | Monday 30 March 2026 01:06:46 +0000 (0:00:16.004) 0:01:14.149 ********** 2026-03-30 01:07:10.121987 | orchestrator | 2026-03-30 01:07:10.121990 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-30 01:07:10.121993 | orchestrator | Monday 30 March 2026 01:06:46 +0000 (0:00:00.591) 0:01:14.740 ********** 2026-03-30 01:07:10.121996 | orchestrator | 2026-03-30 01:07:10.121999 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-30 01:07:10.122003 | orchestrator | Monday 30 March 2026 01:06:47 +0000 (0:00:00.305) 0:01:15.046 ********** 2026-03-30 01:07:10.122006 | orchestrator | 2026-03-30 01:07:10.122009 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-30 01:07:10.122041 | orchestrator | Monday 30 March 2026 01:06:47 +0000 (0:00:00.154) 0:01:15.201 ********** 2026-03-30 01:07:10.122053 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.122058 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:10.122062 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:10.122067 | orchestrator | 2026-03-30 01:07:10.122072 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-30 01:07:10.122076 | orchestrator | Monday 30 March 2026 01:06:59 +0000 (0:00:12.195) 0:01:27.396 ********** 2026-03-30 01:07:10.122082 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:10.122086 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:10.122090 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:10.122095 | orchestrator | 2026-03-30 01:07:10.122099 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:07:10.122105 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-30 01:07:10.122110 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 01:07:10.122115 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 01:07:10.122120 | orchestrator | 2026-03-30 01:07:10.122125 | orchestrator | 2026-03-30 01:07:10.122129 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:07:10.122134 | orchestrator | Monday 30 March 2026 01:07:08 +0000 (0:00:08.676) 0:01:36.072 ********** 2026-03-30 01:07:10.122144 | orchestrator | =============================================================================== 2026-03-30 01:07:10.122149 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.00s 2026-03-30 01:07:10.122154 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.20s 2026-03-30 01:07:10.122159 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 8.68s 2026-03-30 01:07:10.122164 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.90s 2026-03-30 01:07:10.122168 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.66s 2026-03-30 01:07:10.122174 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.81s 2026-03-30 01:07:10.122179 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.47s 2026-03-30 01:07:10.122184 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.45s 2026-03-30 01:07:10.122204 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.25s 2026-03-30 01:07:10.122210 | orchestrator | 
service-ks-register : magnum | Granting user roles ---------------------- 3.17s 2026-03-30 01:07:10.122216 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.00s 2026-03-30 01:07:10.122220 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.94s 2026-03-30 01:07:10.122223 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.92s 2026-03-30 01:07:10.122226 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.47s 2026-03-30 01:07:10.122230 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.40s 2026-03-30 01:07:10.122233 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.37s 2026-03-30 01:07:10.122236 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.93s 2026-03-30 01:07:10.122239 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.89s 2026-03-30 01:07:10.122242 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.79s 2026-03-30 01:07:10.122249 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.78s 2026-03-30 01:07:10.122253 | orchestrator | 2026-03-30 01:07:10 | INFO  | Task e380468d-46e0-465b-b019-f28db6a17cbf is in state SUCCESS 2026-03-30 01:07:10.122256 | orchestrator | 2026-03-30 01:07:10 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:07:10.122259 | orchestrator | 2026-03-30 01:07:10 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state STARTED 2026-03-30 01:07:10.122528 | orchestrator | 2026-03-30 01:07:10 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state STARTED 2026-03-30 01:07:10.122584 | orchestrator | 2026-03-30 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-03-30 
01:07:52.728238 | orchestrator | 2026-03-30 01:07:52 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:07:52.734524 | orchestrator | 2026-03-30 01:07:52.734586 | orchestrator | 2026-03-30 01:07:52 | INFO  | Task bfebb8ce-3cb5-4822-a06c-f03421e686a7 is in state SUCCESS 2026-03-30 01:07:52.735777 | orchestrator | 2026-03-30 01:07:52.735807 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:07:52.735813 | orchestrator | 2026-03-30 01:07:52.735818 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:07:52.735822 | orchestrator | Monday 30 March 2026 01:05:57 +0000 (0:00:00.238) 0:00:00.238 ********** 2026-03-30 01:07:52.735826 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:07:52.735831 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:07:52.735835 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:07:52.735838 | orchestrator | 2026-03-30 01:07:52.735842 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:07:52.735846 | orchestrator | Monday 30 March 2026 01:05:57 +0000
(0:00:00.242) 0:00:00.480 ********** 2026-03-30 01:07:52.735850 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-30 01:07:52.735854 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-30 01:07:52.735859 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-30 01:07:52.735863 | orchestrator | 2026-03-30 01:07:52.735867 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-30 01:07:52.735870 | orchestrator | 2026-03-30 01:07:52.735874 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-30 01:07:52.735878 | orchestrator | Monday 30 March 2026 01:05:57 +0000 (0:00:00.265) 0:00:00.745 ********** 2026-03-30 01:07:52.735882 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:07:52.735898 | orchestrator | 2026-03-30 01:07:52.735902 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-30 01:07:52.735906 | orchestrator | Monday 30 March 2026 01:05:58 +0000 (0:00:00.503) 0:00:01.249 ********** 2026-03-30 01:07:52.735919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.735924 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.735929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.735933 | orchestrator | 2026-03-30 01:07:52.735937 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-30 01:07:52.735941 | orchestrator | Monday 30 March 2026 01:05:59 +0000 (0:00:00.974) 0:00:02.223 ********** 2026-03-30 01:07:52.735944 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-30 01:07:52.735949 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-30 01:07:52.735953 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 01:07:52.736001 | orchestrator | 2026-03-30 01:07:52.736006 | orchestrator | TASK 
[grafana : include_tasks] ************************************************* 2026-03-30 01:07:52.736010 | orchestrator | Monday 30 March 2026 01:05:59 +0000 (0:00:00.890) 0:00:03.114 ********** 2026-03-30 01:07:52.736014 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:07:52.736018 | orchestrator | 2026-03-30 01:07:52.736022 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-30 01:07:52.736026 | orchestrator | Monday 30 March 2026 01:06:00 +0000 (0:00:00.492) 0:00:03.607 ********** 2026-03-30 01:07:52.736036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736055 | orchestrator | 2026-03-30 01:07:52.736059 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-30 01:07:52.736063 | orchestrator | Monday 30 March 2026 01:06:01 +0000 (0:00:01.452) 0:00:05.059 ********** 2026-03-30 01:07:52.736067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 01:07:52.736071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 01:07:52.736075 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.736079 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.736085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 01:07:52.736093 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.736097 | orchestrator | 2026-03-30 01:07:52.736101 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-30 01:07:52.736105 | orchestrator | Monday 30 March 2026 01:06:02 +0000 (0:00:00.393) 0:00:05.453 ********** 2026-03-30 01:07:52.736109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 01:07:52.736113 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.736135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 01:07:52.736140 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.736144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-30 01:07:52.736148 | orchestrator | skipping: [testbed-node-2] 
2026-03-30 01:07:52.736152 | orchestrator | 2026-03-30 01:07:52.736156 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-30 01:07:52.736160 | orchestrator | Monday 30 March 2026 01:06:02 +0000 (0:00:00.685) 0:00:06.138 ********** 2026-03-30 01:07:52.736163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736195 | orchestrator | 2026-03-30 01:07:52.736199 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-30 01:07:52.736208 | orchestrator | Monday 30 March 2026 01:06:04 +0000 (0:00:01.409) 0:00:07.548 ********** 2026-03-30 01:07:52.736212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.736227 | orchestrator | 2026-03-30 01:07:52.736231 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-30 01:07:52.736243 | orchestrator | Monday 30 March 2026 01:06:05 +0000 (0:00:01.158) 0:00:08.707 ********** 2026-03-30 01:07:52.736247 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.736251 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.736255 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.736259 | orchestrator | 2026-03-30 01:07:52.736262 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-30 01:07:52.736266 | orchestrator | Monday 30 March 2026 01:06:05 +0000 (0:00:00.250) 0:00:08.957 ********** 2026-03-30 01:07:52.736279 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-30 01:07:52.736283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-30 01:07:52.736293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-30 
01:07:52.736297 | orchestrator | 2026-03-30 01:07:52.736301 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-30 01:07:52.736305 | orchestrator | Monday 30 March 2026 01:06:06 +0000 (0:00:01.087) 0:00:10.044 ********** 2026-03-30 01:07:52.736309 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-30 01:07:52.736313 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-30 01:07:52.736316 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-30 01:07:52.736320 | orchestrator | 2026-03-30 01:07:52.736324 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-30 01:07:52.736328 | orchestrator | Monday 30 March 2026 01:06:08 +0000 (0:00:01.225) 0:00:11.270 ********** 2026-03-30 01:07:52.736335 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-30 01:07:52.736339 | orchestrator | 2026-03-30 01:07:52.736343 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-30 01:07:52.736346 | orchestrator | Monday 30 March 2026 01:06:08 +0000 (0:00:00.820) 0:00:12.090 ********** 2026-03-30 01:07:52.736350 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-30 01:07:52.736354 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-30 01:07:52.736358 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:07:52.736362 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:07:52.736366 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:07:52.736370 | orchestrator | 2026-03-30 01:07:52.736374 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-30 
01:07:52.736378 | orchestrator | Monday 30 March 2026 01:06:09 +0000 (0:00:00.598) 0:00:12.689 ********** 2026-03-30 01:07:52.736382 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.736385 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.736389 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.736393 | orchestrator | 2026-03-30 01:07:52.736397 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-30 01:07:52.736401 | orchestrator | Monday 30 March 2026 01:06:09 +0000 (0:00:00.420) 0:00:13.109 ********** 2026-03-30 01:07:52.736407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1096352, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2943528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1096352, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2943528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 
01:07:52.736417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1096352, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2943528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1096382, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3313634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1096382, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3313634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1096382, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3313634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1096602, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.342374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1096602, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.342374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1096602, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.342374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096375, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.297951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096375, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.297951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096375, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.297951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1096605, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3433328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1096605, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3433328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1096605, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3433328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1096366, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2958336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1096366, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2958336, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1096366, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2958336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1096574, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3363328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1096574, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 
1774828949.0, 'ctime': 1774829831.3363328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1096574, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3363328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1096594, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3403327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1096594, 'dev': 119, 
'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3403327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1096594, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3403327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096348, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2925863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096348, 'dev': 119, 'nlink': 1, 
'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2925863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096348, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2925863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096363, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2951524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096363, 'dev': 119, 'nlink': 1, 
'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2951524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.736992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096363, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2951524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096376, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2983947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 
1096376, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2983947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096376, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2983947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1096577, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3372967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 19231, 'inode': 1096577, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3372967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1096577, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3372967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1096600, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3418603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 13320, 'inode': 1096600, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3418603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1096600, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3418603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096372, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2974749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096372, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2974749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096372, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2974749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1096584, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.339939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1096584, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.339939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1096584, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.339939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1096612, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3443327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1096612, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3443327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1096612, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3443327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1096576, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3363328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1096576, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3363328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1096576, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3363328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1096560, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3349888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1096560, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3349888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1096560, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3349888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1096557, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3333972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1096557, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3333972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1096578, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3380773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1096557, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3333972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1096578, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3380773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1096551, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3333972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1096578, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3380773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1096551, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3333972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1096596, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3414044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1096596, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3414044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1096551, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3333972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1096369, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2964137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1096596, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3414044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1096369, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2964137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096744, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.374651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096744, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.374651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1096369, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.2964137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096634, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3568776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096634, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3568776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096744, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.374651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096624, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3479137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096624, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3479137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096634, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3568776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1096671, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3601725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1096671, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3601725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096624, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3479137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096618, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3460097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096618, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3460097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1096671, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3601725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096711, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.369409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096711, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.369409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096618, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3460097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096675, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3672507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096675, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3672507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096711, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.369409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1096721, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1096721, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096675, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3672507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096737, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3735585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096737, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3735585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1096721, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3696485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1096708, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3683627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1096708, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3683627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096737, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3735585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-30 01:07:52.737489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096661, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3585324, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096661, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3585324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1096708, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3683627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096632, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 
'mtime': 1774828949.0, 'ctime': 1774829831.350333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096632, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.350333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096661, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3585324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 
1096654, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3577483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096654, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3577483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096632, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.350333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096627, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.350333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096627, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.350333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096654, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3577483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1096665, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3589869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1096665, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3589869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096627, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.350333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096730, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3727303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096730, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3727303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1096665, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3589869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737656 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096724, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.371462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096724, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.371462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096730, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3727303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096620, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3460097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096620, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3460097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096724, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.371462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096621, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.346645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096621, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.346645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096620, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3460097, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096704, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3677542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096704, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3677542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
53882, 'inode': 1096621, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.346645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1096723, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3703332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1096723, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3703332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096704, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3677542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1096723, 'dev': 119, 'nlink': 1, 'atime': 1774828949.0, 'mtime': 1774828949.0, 'ctime': 1774829831.3703332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-30 01:07:52.737825 | orchestrator | 2026-03-30 01:07:52.737833 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-30 01:07:52.737841 | orchestrator | Monday 30 March 2026 01:06:48 +0000 (0:00:38.259) 0:00:51.368 ********** 2026-03-30 01:07:52.737852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.737857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.737862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-30 01:07:52.737877 | orchestrator | 2026-03-30 01:07:52.737882 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-30 01:07:52.737887 | orchestrator | Monday 30 March 2026 01:06:49 +0000 (0:00:01.631) 0:00:53.000 ********** 2026-03-30 01:07:52.737892 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.737898 | orchestrator | 2026-03-30 01:07:52.737903 | orchestrator | TASK [grafana : Creating grafana database user and 
setting permissions] ******** 2026-03-30 01:07:52.737907 | orchestrator | Monday 30 March 2026 01:06:52 +0000 (0:00:02.306) 0:00:55.307 ********** 2026-03-30 01:07:52.737912 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.737917 | orchestrator | 2026-03-30 01:07:52.737921 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-30 01:07:52.737925 | orchestrator | Monday 30 March 2026 01:06:54 +0000 (0:00:02.348) 0:00:57.656 ********** 2026-03-30 01:07:52.737929 | orchestrator | 2026-03-30 01:07:52.737933 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-30 01:07:52.737938 | orchestrator | Monday 30 March 2026 01:06:54 +0000 (0:00:00.060) 0:00:57.716 ********** 2026-03-30 01:07:52.737942 | orchestrator | 2026-03-30 01:07:52.737946 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-30 01:07:52.737950 | orchestrator | Monday 30 March 2026 01:06:54 +0000 (0:00:00.058) 0:00:57.775 ********** 2026-03-30 01:07:52.737954 | orchestrator | 2026-03-30 01:07:52.737958 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-30 01:07:52.737963 | orchestrator | Monday 30 March 2026 01:06:54 +0000 (0:00:00.061) 0:00:57.836 ********** 2026-03-30 01:07:52.737967 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.737974 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.737980 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.737989 | orchestrator | 2026-03-30 01:07:52.737998 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-30 01:07:52.738005 | orchestrator | Monday 30 March 2026 01:06:56 +0000 (0:00:01.752) 0:00:59.589 ********** 2026-03-30 01:07:52.738011 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.738065 | orchestrator | skipping: 
[testbed-node-2] 2026-03-30 01:07:52.738069 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-30 01:07:52.738074 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-30 01:07:52.738078 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:07:52.738082 | orchestrator | 2026-03-30 01:07:52.738087 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-30 01:07:52.738091 | orchestrator | Monday 30 March 2026 01:07:22 +0000 (0:00:26.233) 0:01:25.822 ********** 2026-03-30 01:07:52.738095 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.738099 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.738103 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.738107 | orchestrator | 2026-03-30 01:07:52.738112 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-30 01:07:52.738116 | orchestrator | Monday 30 March 2026 01:07:44 +0000 (0:00:21.802) 0:01:47.624 ********** 2026-03-30 01:07:52.738120 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:07:52.738124 | orchestrator | 2026-03-30 01:07:52.738128 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-30 01:07:52.738132 | orchestrator | Monday 30 March 2026 01:07:47 +0000 (0:00:02.631) 0:01:50.256 ********** 2026-03-30 01:07:52.738136 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.738145 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.738149 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.738153 | orchestrator | 2026-03-30 01:07:52.738158 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-30 01:07:52.738164 | orchestrator | Monday 30 March 2026 01:07:47 +0000 (0:00:00.256) 0:01:50.512 
**********
2026-03-30 01:07:52.738169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-30 01:07:52.738175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-30 01:07:52.738180 | orchestrator |
2026-03-30 01:07:52.738184 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-30 01:07:52.738188 | orchestrator | Monday 30 March 2026 01:07:49 +0000 (0:00:02.552) 0:01:53.065 **********
2026-03-30 01:07:52.738192 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.738196 | orchestrator |
2026-03-30 01:07:52.738200 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:07:52.738205 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 01:07:52.738209 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 01:07:52.738214 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-30 01:07:52.738218 | orchestrator |
2026-03-30 01:07:52.738222 | orchestrator |
2026-03-30 01:07:52.738226 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:07:52.738230 | orchestrator | Monday 30 March 2026 01:07:50 +0000 (0:00:00.242) 0:01:53.308 **********
2026-03-30 01:07:52.738234 | orchestrator | ===============================================================================
2026-03-30 01:07:52.738238 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.26s
2026-03-30 01:07:52.738243 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.23s
2026-03-30 01:07:52.738247 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 21.80s
2026-03-30 01:07:52.738251 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.63s
2026-03-30 01:07:52.738255 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.55s
2026-03-30 01:07:52.738259 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s
2026-03-30 01:07:52.738264 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.31s
2026-03-30 01:07:52.738268 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.75s
2026-03-30 01:07:52.738272 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.63s
2026-03-30 01:07:52.738276 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.45s
2026-03-30 01:07:52.738280 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.41s
2026-03-30 01:07:52.738284 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.23s
2026-03-30 01:07:52.738293 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.16s
2026-03-30 01:07:52.738297 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.09s
2026-03-30 01:07:52.738301 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.97s
2026-03-30 01:07:52.738308 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s
2026-03-30 01:07:52.738312 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.82s
2026-03-30 01:07:52.738317 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.69s
2026-03-30 01:07:52.738321 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.60s
2026-03-30 01:07:52.738325 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.50s
2026-03-30 01:07:52.739388 | orchestrator | 2026-03-30 01:07:52 | INFO  | Task 26067941-ce54-4759-91c7-3d320d245518 is in state SUCCESS
2026-03-30 01:07:52.741288 | orchestrator |
2026-03-30 01:07:52.741343 | orchestrator |
2026-03-30 01:07:52.741348 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 01:07:52.741369 | orchestrator |
2026-03-30 01:07:52.741374 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-30 01:07:52.741378 | orchestrator | Monday 30 March 2026 00:59:05 +0000 (0:00:00.404) 0:00:00.404 **********
2026-03-30 01:07:52.741382 | orchestrator | changed: [testbed-manager]
2026-03-30 01:07:52.741387 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741391 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:07:52.741396 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:07:52.741400 | orchestrator | changed: [testbed-node-3]
2026-03-30 01:07:52.741404 | orchestrator | changed: [testbed-node-4]
2026-03-30 01:07:52.741408 | orchestrator | changed: [testbed-node-5]
2026-03-30 01:07:52.741413 | orchestrator |
2026-03-30 01:07:52.741417 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 01:07:52.741440 | orchestrator | Monday 30 March 2026 00:59:06 +0000 (0:00:00.647) 0:00:01.052 **********
2026-03-30 01:07:52.741445 | orchestrator | changed: [testbed-manager]
2026-03-30 01:07:52.741449 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741453 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:07:52.741457 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:07:52.741461 | orchestrator | changed: [testbed-node-3]
2026-03-30 01:07:52.741466 | orchestrator | changed: [testbed-node-4]
2026-03-30 01:07:52.741470 | orchestrator | changed: [testbed-node-5]
2026-03-30 01:07:52.741474 | orchestrator |
2026-03-30 01:07:52.741478 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 01:07:52.741482 | orchestrator | Monday 30 March 2026 00:59:07 +0000 (0:00:00.864) 0:00:01.916 **********
2026-03-30 01:07:52.741487 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-30 01:07:52.741491 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-30 01:07:52.741495 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-30 01:07:52.741499 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-30 01:07:52.741504 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-30 01:07:52.741508 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-30 01:07:52.741512 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-30 01:07:52.741516 | orchestrator |
2026-03-30 01:07:52.741520 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-30 01:07:52.741524 | orchestrator |
2026-03-30 01:07:52.741528 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-30 01:07:52.741532 | orchestrator | Monday 30 March 2026 00:59:07 +0000 (0:00:00.653) 0:00:02.569 **********
2026-03-30 01:07:52.741536 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:07:52.741540 | orchestrator |
2026-03-30 01:07:52.741544 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-30 01:07:52.741549 | orchestrator | Monday 30 March 2026 00:59:08 +0000 (0:00:00.643) 0:00:03.213 **********
2026-03-30 01:07:52.741553 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-30 01:07:52.741565 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-30 01:07:52.741569 | orchestrator |
2026-03-30 01:07:52.741574 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-30 01:07:52.741578 | orchestrator | Monday 30 March 2026 00:59:13 +0000 (0:00:04.587) 0:00:07.801 **********
2026-03-30 01:07:52.741582 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-30 01:07:52.741586 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-30 01:07:52.741590 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741594 | orchestrator |
2026-03-30 01:07:52.741598 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-30 01:07:52.741603 | orchestrator | Monday 30 March 2026 00:59:17 +0000 (0:00:04.687) 0:00:12.488 **********
2026-03-30 01:07:52.741607 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741611 | orchestrator |
2026-03-30 01:07:52.741615 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-30 01:07:52.741619 | orchestrator | Monday 30 March 2026 00:59:18 +0000 (0:00:01.117) 0:00:13.606 **********
2026-03-30 01:07:52.741623 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741627 | orchestrator |
2026-03-30 01:07:52.741631 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-30 01:07:52.741636 | orchestrator | Monday 30 March 2026 00:59:20 +0000 (0:00:01.436) 0:00:15.042 **********
2026-03-30 01:07:52.741640 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741644 | orchestrator |
2026-03-30 01:07:52.741648 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-30 01:07:52.741652 | orchestrator | Monday 30 March 2026 00:59:24 +0000 (0:00:03.677) 0:00:18.720 **********
2026-03-30 01:07:52.741657 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.741661 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.741665 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.741669 | orchestrator |
2026-03-30 01:07:52.741673 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-30 01:07:52.741677 | orchestrator | Monday 30 March 2026 00:59:24 +0000 (0:00:00.676) 0:00:19.399 **********
2026-03-30 01:07:52.741681 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.741686 | orchestrator |
2026-03-30 01:07:52.741690 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-30 01:07:52.741694 | orchestrator | Monday 30 March 2026 00:59:59 +0000 (0:00:35.012) 0:00:54.412 **********
2026-03-30 01:07:52.741698 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741702 | orchestrator |
2026-03-30 01:07:52.741706 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-30 01:07:52.741710 | orchestrator | Monday 30 March 2026 01:00:16 +0000 (0:00:16.331) 0:01:10.744 **********
2026-03-30 01:07:52.741715 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.741719 | orchestrator |
2026-03-30 01:07:52.741723 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-30 01:07:52.741727 | orchestrator | Monday 30 March 2026 01:00:29 +0000 (0:00:12.961) 0:01:23.706 **********
2026-03-30 01:07:52.741737 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.741741 | orchestrator |
2026-03-30 01:07:52.741745 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-30 01:07:52.741757 | orchestrator | Monday 30 March 2026 01:00:29 +0000 (0:00:00.573) 0:01:24.280 **********
2026-03-30 01:07:52.741762 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.741770 | orchestrator |
2026-03-30 01:07:52.741774 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-30 01:07:52.741785 | orchestrator | Monday 30 March 2026 01:00:30 +0000 (0:00:00.397) 0:01:24.678 **********
2026-03-30 01:07:52.741789 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:07:52.741794 | orchestrator |
2026-03-30 01:07:52.741798 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-30 01:07:52.741802 | orchestrator | Monday 30 March 2026 01:00:30 +0000 (0:00:00.565) 0:01:25.243 **********
2026-03-30 01:07:52.741837 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.741848 | orchestrator |
2026-03-30 01:07:52.741853 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-30 01:07:52.741857 | orchestrator | Monday 30 March 2026 01:00:50 +0000 (0:00:19.727) 0:01:44.970 **********
2026-03-30 01:07:52.741861 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.741865 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.741870 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.741874 | orchestrator |
2026-03-30 01:07:52.741878 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-30 01:07:52.741882 | orchestrator |
2026-03-30 01:07:52.741886 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-30 01:07:52.741893 | orchestrator | Monday 30 March 2026 01:00:50 +0000 (0:00:00.302) 0:01:45.273 **********
2026-03-30 01:07:52.741900 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:07:52.741909 | orchestrator |
2026-03-30 01:07:52.741918 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-30 01:07:52.741925 | orchestrator | Monday 30 March 2026 01:00:51 +0000 (0:00:00.655) 0:01:45.929 **********
2026-03-30 01:07:52.741931 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.741937 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.741944 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741951 | orchestrator |
2026-03-30 01:07:52.741958 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-30 01:07:52.741964 | orchestrator | Monday 30 March 2026 01:00:53 +0000 (0:00:02.033) 0:01:47.963 **********
2026-03-30 01:07:52.741971 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.741978 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.741985 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.741992 | orchestrator |
2026-03-30 01:07:52.741999 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-30 01:07:52.742006 | orchestrator | Monday 30 March 2026 01:00:56 +0000 (0:00:02.811) 0:01:50.775 **********
2026-03-30 01:07:52.742040 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742050 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742056 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742063 | orchestrator |
2026-03-30 01:07:52.742071 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-30 01:07:52.742078 | orchestrator | Monday 30 March 2026 01:00:57 +0000 (0:00:01.170) 0:01:51.945 **********
2026-03-30 01:07:52.742085 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-30 01:07:52.742092 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742100 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-30 01:07:52.742107 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742114 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-30 01:07:52.742122 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-30 01:07:52.742130 | orchestrator |
2026-03-30 01:07:52.742137 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-30 01:07:52.742143 | orchestrator | Monday 30 March 2026 01:01:05 +0000 (0:00:07.876) 0:01:59.822 **********
2026-03-30 01:07:52.742150 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742157 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742163 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742170 | orchestrator |
2026-03-30 01:07:52.742177 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-30 01:07:52.742184 | orchestrator | Monday 30 March 2026 01:01:05 +0000 (0:00:00.395) 0:02:00.218 **********
2026-03-30 01:07:52.742190 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-30 01:07:52.742195 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742200 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-30 01:07:52.742211 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742216 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-30 01:07:52.742221 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742226 | orchestrator |
2026-03-30 01:07:52.742230 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-30 01:07:52.742234 | orchestrator | Monday 30 March 2026 01:01:06 +0000 (0:00:01.095) 0:02:01.313 **********
2026-03-30 01:07:52.742238 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742243 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742247 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.742251 | orchestrator |
2026-03-30 01:07:52.742255 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-30 01:07:52.742259 | orchestrator | Monday 30 March 2026 01:01:07 +0000 (0:00:00.560) 0:02:01.874 **********
2026-03-30 01:07:52.742263 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742267 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742271 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.742275 | orchestrator |
2026-03-30 01:07:52.742291 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-30 01:07:52.742296 | orchestrator | Monday 30 March 2026 01:01:08 +0000 (0:00:01.194) 0:02:03.069 **********
2026-03-30 01:07:52.742300 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742304 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742313 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.742317 | orchestrator |
2026-03-30 01:07:52.742322 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-30 01:07:52.742326 | orchestrator | Monday 30 March 2026 01:01:11 +0000 (0:00:03.418) 0:02:06.487 **********
2026-03-30 01:07:52.742330 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742334 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742338 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.742342 | orchestrator |
2026-03-30 01:07:52.742347 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-30 01:07:52.742351 | orchestrator | Monday 30 March 2026 01:01:33 +0000 (0:00:21.289) 0:02:27.777 **********
2026-03-30 01:07:52.742375 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742379 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742390 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.742395 | orchestrator |
2026-03-30 01:07:52.742399 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-30 01:07:52.742406 | orchestrator | Monday 30 March 2026 01:01:47 +0000 (0:00:14.307) 0:02:42.084 **********
2026-03-30 01:07:52.742410 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:07:52.742415 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742453 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742461 | orchestrator |
2026-03-30 01:07:52.742468 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-30 01:07:52.742475 | orchestrator | Monday 30 March 2026 01:01:48 +0000 (0:00:00.835) 0:02:42.920 **********
2026-03-30 01:07:52.742481 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742488 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742494 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:07:52.742502 | orchestrator |
2026-03-30 01:07:52.742509 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-30 01:07:52.742516 | orchestrator | Monday 30 March 2026 01:02:01 +0000 (0:00:13.551) 0:02:56.472 **********
2026-03-30 01:07:52.742522 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742528 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742534 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742541 | orchestrator |
2026-03-30 01:07:52.742548 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-30 01:07:52.742555 | orchestrator | Monday 30 March 2026 01:02:03 +0000 (0:00:01.496) 0:02:57.969 **********
2026-03-30 01:07:52.742562 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742571 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742575 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.742579 | orchestrator |
2026-03-30 01:07:52.742583 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-30 01:07:52.742587 | orchestrator |
2026-03-30 01:07:52.742591 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-30 01:07:52.742596 | orchestrator | Monday 30 March 2026 01:02:03 +0000 (0:00:00.658) 0:02:58.627 **********
2026-03-30 01:07:52.742600 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:07:52.742604 | orchestrator |
2026-03-30 01:07:52.742609 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-30 01:07:52.742613 | orchestrator | Monday 30 March 2026 01:02:05 +0000 (0:00:01.063) 0:02:59.690 **********
2026-03-30 01:07:52.742617 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-30 01:07:52.742621 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-30 01:07:52.742628 | orchestrator |
2026-03-30 01:07:52.742635 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-30 01:07:52.742642 | orchestrator | Monday 30 March 2026 01:02:08 +0000 (0:00:03.764) 0:03:03.455 **********
2026-03-30 01:07:52.742648 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-30 01:07:52.742653 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-30 01:07:52.742659 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-30 01:07:52.742666 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-30 01:07:52.742674 | orchestrator |
2026-03-30 01:07:52.742681 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-30 01:07:52.742685 | orchestrator | Monday 30 March 2026 01:02:15 +0000 (0:00:06.528) 0:03:09.983 **********
2026-03-30 01:07:52.742689 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-30 01:07:52.742693 | orchestrator |
2026-03-30 01:07:52.742698 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-30 01:07:52.742702 | orchestrator | Monday 30 March 2026 01:02:19 +0000 (0:00:03.814) 0:03:13.798 **********
2026-03-30 01:07:52.742706 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-30 01:07:52.742710 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-30 01:07:52.742714 | orchestrator |
2026-03-30 01:07:52.742718 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-30 01:07:52.742722 | orchestrator | Monday 30 March 2026 01:02:23 +0000 (0:00:04.626) 0:03:18.424 **********
2026-03-30 01:07:52.742727 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-30 01:07:52.742731 | orchestrator |
2026-03-30 01:07:52.742735 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-30 01:07:52.742739 | orchestrator | Monday 30 March 2026 01:02:27 +0000 (0:00:03.510) 0:03:21.934 **********
2026-03-30 01:07:52.742743 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-30 01:07:52.742747 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-30 01:07:52.742751 | orchestrator |
2026-03-30 01:07:52.742763 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-30 01:07:52.742771 | orchestrator | Monday 30 March 2026 01:02:36 +0000 (0:00:08.751) 0:03:30.685 **********
2026-03-30 01:07:52.742781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-30 01:07:52.742819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:07:52.742825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-30 01:07:52.742839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-30 01:07:52.742882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:07:52.742897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:07:52.742905 | orchestrator |
2026-03-30 01:07:52.742912 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-30 01:07:52.742920 | orchestrator | Monday 30 March 2026 01:02:39 +0000 (0:00:03.060) 0:03:33.746 **********
2026-03-30 01:07:52.742925 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742929 | orchestrator |
2026-03-30 01:07:52.742933 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-30 01:07:52.742937 | orchestrator | Monday 30 March 2026 01:02:39 +0000 (0:00:00.266) 0:03:34.013 **********
2026-03-30 01:07:52.742953 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.742960 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.742967 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.743013 | orchestrator |
2026-03-30 01:07:52.743023 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-30 01:07:52.743042 | orchestrator | Monday 30 March 2026 01:02:40 +0000 (0:00:00.748) 0:03:34.761 **********
2026-03-30 01:07:52.743058 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-30 01:07:52.743066 | orchestrator |
2026-03-30 01:07:52.743073 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-30 01:07:52.743080 | orchestrator | Monday 30 March 2026 01:02:41 +0000 (0:00:01.430) 0:03:36.192 **********
2026-03-30 01:07:52.743087 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:07:52.743094 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:07:52.743101 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:07:52.743108 | orchestrator |
2026-03-30 01:07:52.743115 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-30 01:07:52.743133 | orchestrator | Monday 30 March 2026 01:02:41 +0000 (0:00:00.346) 0:03:36.539 **********
2026-03-30 01:07:52.743139 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:07:52.743146 | orchestrator |
2026-03-30 01:07:52.743153 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-30 01:07:52.743159 | orchestrator | Monday 30 March 2026 01:02:42 +0000 (0:00:00.599) 0:03:37.138 **********
2026-03-30 01:07:52.743174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-30 01:07:52.743200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-30 01:07:52.743210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-30 01:07:52.743218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:07:52.743226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-30 01:07:52.743244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743249 | orchestrator | 2026-03-30 01:07:52.743253 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-30 01:07:52.743260 | orchestrator | Monday 30 March 2026 01:02:45 +0000 (0:00:02.737) 0:03:39.876 ********** 2026-03-30 01:07:52.743269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743285 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.743292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743310 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.743320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743330 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.743334 | orchestrator | 2026-03-30 01:07:52.743338 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-30 01:07:52.743343 | orchestrator | Monday 30 March 2026 01:02:45 +0000 (0:00:00.577) 0:03:40.454 ********** 2026-03-30 01:07:52.743351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743367 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743376 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.743391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743400 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743408 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.743413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743451 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.743459 | orchestrator | 2026-03-30 01:07:52.743466 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-30 01:07:52.743474 | orchestrator | Monday 30 March 2026 01:02:46 +0000 (0:00:00.972) 0:03:41.426 ********** 2026-03-30 01:07:52.743483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743520 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743525 | orchestrator | 2026-03-30 01:07:52.743533 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-30 01:07:52.743538 | orchestrator | Monday 30 March 2026 01:02:49 +0000 (0:00:03.168) 0:03:44.594 ********** 2026-03-30 01:07:52.743543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743582 | orchestrator | 2026-03-30 01:07:52.743586 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-30 01:07:52.743590 | orchestrator | Monday 30 March 2026 01:02:58 +0000 (0:00:08.641) 0:03:53.236 ********** 2026-03-30 01:07:52.743595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743619 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.743626 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.743631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-30 01:07:52.743659 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.743664 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.743669 | orchestrator | 2026-03-30 01:07:52.743673 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-30 01:07:52.743677 | orchestrator | Monday 30 March 2026 01:03:00 +0000 (0:00:01.489) 0:03:54.725 ********** 2026-03-30 01:07:52.743682 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.743686 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.743690 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.743694 | orchestrator | 2026-03-30 01:07:52.743702 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-30 01:07:52.743706 | orchestrator | Monday 30 March 2026 01:03:03 +0000 (0:00:03.441) 0:03:58.167 ********** 2026-03-30 01:07:52.743710 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.743714 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.743718 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.743723 | orchestrator | 2026-03-30 01:07:52.743727 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-30 01:07:52.743731 | orchestrator | Monday 30 March 2026 01:03:04 +0000 (0:00:00.746) 0:03:58.914 ********** 2026-03-30 01:07:52.743738 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-30 01:07:52.743765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.743797 | orchestrator | 2026-03-30 01:07:52.743804 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-30 01:07:52.743812 | orchestrator | Monday 30 March 2026 01:03:06 +0000 (0:00:02.717) 0:04:01.631 ********** 2026-03-30 01:07:52.743819 | orchestrator | 2026-03-30 01:07:52.743826 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2026-03-30 01:07:52.743833 | orchestrator | Monday 30 March 2026 01:03:07 +0000 (0:00:00.489) 0:04:02.121 ********** 2026-03-30 01:07:52.743841 | orchestrator | 2026-03-30 01:07:52.743848 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-30 01:07:52.743853 | orchestrator | Monday 30 March 2026 01:03:07 +0000 (0:00:00.246) 0:04:02.367 ********** 2026-03-30 01:07:52.743857 | orchestrator | 2026-03-30 01:07:52.743862 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-30 01:07:52.743866 | orchestrator | Monday 30 March 2026 01:03:08 +0000 (0:00:00.442) 0:04:02.809 ********** 2026-03-30 01:07:52.743870 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.743874 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.743878 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.743882 | orchestrator | 2026-03-30 01:07:52.743887 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-30 01:07:52.743895 | orchestrator | Monday 30 March 2026 01:03:31 +0000 (0:00:23.027) 0:04:25.836 ********** 2026-03-30 01:07:52.743902 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.743910 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.743917 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.743925 | orchestrator | 2026-03-30 01:07:52.743931 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-30 01:07:52.743939 | orchestrator | 2026-03-30 01:07:52.743945 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-30 01:07:52.743953 | orchestrator | Monday 30 March 2026 01:03:38 +0000 (0:00:07.127) 0:04:32.964 ********** 2026-03-30 01:07:52.743960 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:07:52.743967 | orchestrator | 2026-03-30 01:07:52.743974 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-30 01:07:52.743982 | orchestrator | Monday 30 March 2026 01:03:40 +0000 (0:00:02.063) 0:04:35.027 ********** 2026-03-30 01:07:52.743989 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.743996 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.744004 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.744011 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.744016 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.744021 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.744026 | orchestrator | 2026-03-30 01:07:52.744033 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-30 01:07:52.744041 | orchestrator | Monday 30 March 2026 01:03:41 +0000 (0:00:00.864) 0:04:35.891 ********** 2026-03-30 01:07:52.744048 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.744055 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.744063 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.744070 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:07:52.744078 | orchestrator | 2026-03-30 01:07:52.744085 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-30 01:07:52.744104 | orchestrator | Monday 30 March 2026 01:03:42 +0000 (0:00:01.705) 0:04:37.597 ********** 2026-03-30 01:07:52.744113 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-30 01:07:52.744120 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-30 01:07:52.744127 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-30 
01:07:52.744133 | orchestrator | 2026-03-30 01:07:52.744141 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-30 01:07:52.744148 | orchestrator | Monday 30 March 2026 01:03:44 +0000 (0:00:01.145) 0:04:38.743 ********** 2026-03-30 01:07:52.744155 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-30 01:07:52.744162 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-30 01:07:52.744170 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-30 01:07:52.744177 | orchestrator | 2026-03-30 01:07:52.744184 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-30 01:07:52.744195 | orchestrator | Monday 30 March 2026 01:03:45 +0000 (0:00:01.898) 0:04:40.641 ********** 2026-03-30 01:07:52.744202 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-30 01:07:52.744208 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.744213 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-30 01:07:52.744217 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.744221 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-30 01:07:52.744225 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.744229 | orchestrator | 2026-03-30 01:07:52.744233 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-30 01:07:52.744238 | orchestrator | Monday 30 March 2026 01:03:46 +0000 (0:00:00.823) 0:04:41.465 ********** 2026-03-30 01:07:52.744242 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-30 01:07:52.744247 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-30 01:07:52.744251 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.744256 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-30 01:07:52.744263 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-30 01:07:52.744270 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.744278 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-30 01:07:52.744282 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-30 01:07:52.744286 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.744291 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-30 01:07:52.744295 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-30 01:07:52.744299 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-30 01:07:52.744303 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-30 01:07:52.744307 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-30 01:07:52.744311 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-30 01:07:52.744315 | orchestrator | 2026-03-30 01:07:52.744319 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-30 01:07:52.744324 | orchestrator | Monday 30 March 2026 01:03:47 +0000 (0:00:01.135) 0:04:42.601 ********** 2026-03-30 01:07:52.744328 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.744332 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.744337 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.744344 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.744349 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.744353 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.744362 | 
orchestrator | 2026-03-30 01:07:52.744366 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-30 01:07:52.744370 | orchestrator | Monday 30 March 2026 01:03:49 +0000 (0:00:01.539) 0:04:44.141 ********** 2026-03-30 01:07:52.744374 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.744378 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.744383 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.744387 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.744391 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.744395 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.744399 | orchestrator | 2026-03-30 01:07:52.744403 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-30 01:07:52.744407 | orchestrator | Monday 30 March 2026 01:03:51 +0000 (0:00:01.734) 0:04:45.875 ********** 2026-03-30 01:07:52.744412 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744633 | orchestrator | 2026-03-30 01:07:52.744638 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-30 01:07:52.744642 | orchestrator | Monday 30 March 2026 01:03:53 +0000 (0:00:02.142) 0:04:48.017 ********** 2026-03-30 
01:07:52.744647 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:07:52.744651 | orchestrator | 2026-03-30 01:07:52.744656 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-30 01:07:52.744660 | orchestrator | Monday 30 March 2026 01:03:54 +0000 (0:00:01.257) 0:04:49.274 ********** 2026-03-30 01:07:52.744664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.744826 | orchestrator | 2026-03-30 01:07:52.744834 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-30 01:07:52.744841 | orchestrator | Monday 30 March 2026 01:03:57 +0000 (0:00:03.312) 0:04:52.587 ********** 2026-03-30 01:07:52.744853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.744862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.744870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.744875 | orchestrator | skipping: [testbed-node-3] 2026-03-30 
01:07:52.744880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.744884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.744891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.744895 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.744904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.744911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.744916 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.744920 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.744925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.744929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.744933 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.744940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.744947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.744956 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.744961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.744966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.744970 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.744974 | orchestrator | 2026-03-30 01:07:52.744978 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-30 01:07:52.744983 | orchestrator | Monday 30 March 2026 01:04:00 +0000 (0:00:02.500) 0:04:55.087 ********** 2026-03-30 01:07:52.744987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 
01:07:52.744992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.744999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.745003 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.745017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.745022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.745026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.745031 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.745046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.745055 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.745065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.745069 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.745073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.745078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.745082 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.745087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.745094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.745101 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.745105 | orchestrator | 2026-03-30 01:07:52.745109 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-30 01:07:52.745114 | orchestrator | Monday 30 March 2026 01:04:02 +0000 (0:00:02.377) 0:04:57.464 ********** 2026-03-30 01:07:52.745118 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.745122 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.745126 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.745133 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:07:52.745138 | orchestrator | 2026-03-30 01:07:52.745142 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-30 01:07:52.745146 | orchestrator | Monday 30 March 2026 01:04:03 +0000 (0:00:00.987) 0:04:58.451 ********** 2026-03-30 01:07:52.745150 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 01:07:52.745155 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-30 01:07:52.745159 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-30 01:07:52.745163 | orchestrator | 2026-03-30 01:07:52.745167 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-30 01:07:52.745172 | orchestrator | Monday 30 March 2026 01:04:04 +0000 (0:00:00.865) 0:04:59.317 ********** 2026-03-30 01:07:52.745176 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 01:07:52.745180 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-30 01:07:52.745184 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-30 
01:07:52.745188 | orchestrator | 2026-03-30 01:07:52.745192 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-30 01:07:52.745197 | orchestrator | Monday 30 March 2026 01:04:05 +0000 (0:00:01.077) 0:05:00.394 ********** 2026-03-30 01:07:52.745201 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:07:52.745205 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:07:52.745209 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:07:52.745213 | orchestrator | 2026-03-30 01:07:52.745217 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-30 01:07:52.745222 | orchestrator | Monday 30 March 2026 01:04:06 +0000 (0:00:00.573) 0:05:00.968 ********** 2026-03-30 01:07:52.745226 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:07:52.745230 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:07:52.745234 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:07:52.745238 | orchestrator | 2026-03-30 01:07:52.745243 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-30 01:07:52.745247 | orchestrator | Monday 30 March 2026 01:04:06 +0000 (0:00:00.457) 0:05:01.425 ********** 2026-03-30 01:07:52.745251 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-30 01:07:52.745255 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-30 01:07:52.745259 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-30 01:07:52.745263 | orchestrator | 2026-03-30 01:07:52.745268 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-30 01:07:52.745272 | orchestrator | Monday 30 March 2026 01:04:07 +0000 (0:00:01.165) 0:05:02.591 ********** 2026-03-30 01:07:52.745276 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-30 01:07:52.745280 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-30 
01:07:52.745284 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-30 01:07:52.745288 | orchestrator | 2026-03-30 01:07:52.745293 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-30 01:07:52.745297 | orchestrator | Monday 30 March 2026 01:04:09 +0000 (0:00:01.630) 0:05:04.221 ********** 2026-03-30 01:07:52.745304 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-30 01:07:52.745308 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-30 01:07:52.745312 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-30 01:07:52.745316 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-30 01:07:52.745321 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-30 01:07:52.745325 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-30 01:07:52.745329 | orchestrator | 2026-03-30 01:07:52.745333 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-30 01:07:52.745338 | orchestrator | Monday 30 March 2026 01:04:14 +0000 (0:00:04.695) 0:05:08.917 ********** 2026-03-30 01:07:52.745342 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745346 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745350 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745354 | orchestrator | 2026-03-30 01:07:52.745359 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-30 01:07:52.745363 | orchestrator | Monday 30 March 2026 01:04:14 +0000 (0:00:00.310) 0:05:09.228 ********** 2026-03-30 01:07:52.745367 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745371 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745375 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745379 | orchestrator | 2026-03-30 01:07:52.745384 | 
orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-30 01:07:52.745388 | orchestrator | Monday 30 March 2026 01:04:14 +0000 (0:00:00.311) 0:05:09.539 ********** 2026-03-30 01:07:52.745392 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.745396 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.745400 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.745404 | orchestrator | 2026-03-30 01:07:52.745408 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-30 01:07:52.745413 | orchestrator | Monday 30 March 2026 01:04:16 +0000 (0:00:01.282) 0:05:10.822 ********** 2026-03-30 01:07:52.745433 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-30 01:07:52.745442 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-30 01:07:52.745447 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-30 01:07:52.745451 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-30 01:07:52.745455 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-30 01:07:52.745462 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-30 01:07:52.745466 | orchestrator | 2026-03-30 01:07:52.745470 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-30 01:07:52.745475 | orchestrator | Monday 30 
March 2026 01:04:19 +0000 (0:00:03.144) 0:05:13.966 ********** 2026-03-30 01:07:52.745479 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-30 01:07:52.745483 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-30 01:07:52.745487 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-30 01:07:52.745491 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-30 01:07:52.745495 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.745499 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-30 01:07:52.745504 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.745508 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-30 01:07:52.745521 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.745526 | orchestrator | 2026-03-30 01:07:52.745530 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-30 01:07:52.745534 | orchestrator | Monday 30 March 2026 01:04:22 +0000 (0:00:03.468) 0:05:17.435 ********** 2026-03-30 01:07:52.745538 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.745542 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.745546 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.745551 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-30 01:07:52.745555 | orchestrator | 2026-03-30 01:07:52.745559 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-30 01:07:52.745563 | orchestrator | Monday 30 March 2026 01:04:25 +0000 (0:00:02.298) 0:05:19.733 ********** 2026-03-30 01:07:52.745568 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 01:07:52.745572 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-30 01:07:52.745576 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-30 01:07:52.745580 | orchestrator | 
2026-03-30 01:07:52.745584 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-30 01:07:52.745588 | orchestrator | Monday 30 March 2026 01:04:25 +0000 (0:00:00.873) 0:05:20.607 ********** 2026-03-30 01:07:52.745592 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745597 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745601 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745605 | orchestrator | 2026-03-30 01:07:52.745609 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-30 01:07:52.745613 | orchestrator | Monday 30 March 2026 01:04:26 +0000 (0:00:00.282) 0:05:20.889 ********** 2026-03-30 01:07:52.745617 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745621 | orchestrator | 2026-03-30 01:07:52.745625 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-30 01:07:52.745629 | orchestrator | Monday 30 March 2026 01:04:26 +0000 (0:00:00.121) 0:05:21.011 ********** 2026-03-30 01:07:52.745634 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745638 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745642 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745646 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.745650 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.745655 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.745659 | orchestrator | 2026-03-30 01:07:52.745663 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-30 01:07:52.745667 | orchestrator | Monday 30 March 2026 01:04:27 +0000 (0:00:00.708) 0:05:21.719 ********** 2026-03-30 01:07:52.745671 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-30 01:07:52.745676 | orchestrator | 2026-03-30 01:07:52.745680 | orchestrator | TASK [nova-cell : Set 
vendordata file path] ************************************ 2026-03-30 01:07:52.745684 | orchestrator | Monday 30 March 2026 01:04:27 +0000 (0:00:00.688) 0:05:22.408 ********** 2026-03-30 01:07:52.745688 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745692 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745696 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745701 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.745705 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.745709 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.745713 | orchestrator | 2026-03-30 01:07:52.745717 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-30 01:07:52.745721 | orchestrator | Monday 30 March 2026 01:04:28 +0000 (0:00:00.581) 0:05:22.990 ********** 2026-03-30 01:07:52.745730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745755 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2026-03-30 01:07:52.745812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745820 | orchestrator | 2026-03-30 01:07:52.745825 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-30 01:07:52.745829 | orchestrator | Monday 30 March 2026 01:04:32 +0000 (0:00:04.478) 0:05:27.468 ********** 2026-03-30 01:07:52.745834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.745838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.745842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.745852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.745858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.745863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.745867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.745917 | orchestrator | 2026-03-30 01:07:52.745921 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-30 01:07:52.745926 | orchestrator | Monday 30 March 2026 01:04:38 +0000 (0:00:05.669) 0:05:33.138 ********** 2026-03-30 01:07:52.745930 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.745934 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.745938 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.745942 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.745949 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.745953 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.745957 | orchestrator | 2026-03-30 01:07:52.745961 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-30 01:07:52.745966 | orchestrator | Monday 30 March 2026 01:04:41 +0000 (0:00:03.370) 0:05:36.509 ********** 2026-03-30 01:07:52.745970 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-30 01:07:52.745974 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-30 01:07:52.745978 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-30 01:07:52.745982 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-30 01:07:52.745988 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-30 01:07:52.745993 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-30 01:07:52.745997 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-30 01:07:52.746001 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746005 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-30 01:07:52.746009 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746077 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-30 01:07:52.746082 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746086 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-30 01:07:52.746091 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-30 01:07:52.746095 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-30 01:07:52.746099 | orchestrator | 2026-03-30 01:07:52.746103 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-30 01:07:52.746107 | orchestrator | Monday 30 March 2026 01:04:47 +0000 (0:00:05.413) 0:05:41.922 ********** 2026-03-30 01:07:52.746111 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.746116 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.746120 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.746124 | 
orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746128 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746132 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746136 | orchestrator | 2026-03-30 01:07:52.746140 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-30 01:07:52.746150 | orchestrator | Monday 30 March 2026 01:04:47 +0000 (0:00:00.585) 0:05:42.507 ********** 2026-03-30 01:07:52.746154 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-30 01:07:52.746158 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-30 01:07:52.746163 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-30 01:07:52.746167 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-30 01:07:52.746171 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-30 01:07:52.746175 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-30 01:07:52.746179 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-30 01:07:52.746183 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-30 01:07:52.746188 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-30 01:07:52.746192 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-libvirt'}) 2026-03-30 01:07:52.746196 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-30 01:07:52.746200 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746204 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-30 01:07:52.746208 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746212 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-30 01:07:52.746216 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746220 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-30 01:07:52.746225 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-30 01:07:52.746229 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-30 01:07:52.746236 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-30 01:07:52.746240 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-30 01:07:52.746244 | orchestrator | 2026-03-30 01:07:52.746249 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-30 01:07:52.746253 | orchestrator | Monday 30 March 2026 01:04:54 +0000 (0:00:06.259) 0:05:48.767 ********** 2026-03-30 01:07:52.746257 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-30 01:07:52.746261 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  
2026-03-30 01:07:52.746268 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-30 01:07:52.746273 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-30 01:07:52.746277 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-30 01:07:52.746281 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-30 01:07:52.746285 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-30 01:07:52.746291 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-30 01:07:52.746296 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-30 01:07:52.746300 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-30 01:07:52.746304 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-30 01:07:52.746308 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-30 01:07:52.746312 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-30 01:07:52.746316 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746320 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-30 01:07:52.746324 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-30 01:07:52.746328 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746333 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-30 01:07:52.746337 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2026-03-30 01:07:52.746341 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-30 01:07:52.746345 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746349 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-30 01:07:52.746353 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-30 01:07:52.746357 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-30 01:07:52.746361 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-30 01:07:52.746365 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-30 01:07:52.746370 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-30 01:07:52.746374 | orchestrator | 2026-03-30 01:07:52.746378 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-30 01:07:52.746382 | orchestrator | Monday 30 March 2026 01:05:00 +0000 (0:00:06.326) 0:05:55.094 ********** 2026-03-30 01:07:52.746386 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.746390 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.746394 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.746398 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746403 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746407 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746411 | orchestrator | 2026-03-30 01:07:52.746415 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-30 01:07:52.746438 | orchestrator | Monday 30 March 2026 01:05:01 +0000 (0:00:00.604) 0:05:55.699 ********** 2026-03-30 01:07:52.746444 | orchestrator | 
skipping: [testbed-node-3] 2026-03-30 01:07:52.746448 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.746452 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.746457 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746461 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746465 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746470 | orchestrator | 2026-03-30 01:07:52.746474 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-30 01:07:52.746478 | orchestrator | Monday 30 March 2026 01:05:01 +0000 (0:00:00.751) 0:05:56.450 ********** 2026-03-30 01:07:52.746482 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746486 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746490 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.746494 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746502 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.746506 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.746511 | orchestrator | 2026-03-30 01:07:52.746515 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-03-30 01:07:52.746519 | orchestrator | Monday 30 March 2026 01:05:04 +0000 (0:00:02.814) 0:05:59.264 ********** 2026-03-30 01:07:52.746523 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746530 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.746535 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746539 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.746543 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746547 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.746551 | orchestrator | 2026-03-30 01:07:52.746555 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-30 01:07:52.746559 | 
orchestrator | Monday 30 March 2026 01:05:07 +0000 (0:00:03.058) 0:06:02.323 ********** 2026-03-30 01:07:52.746566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.746571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.746575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.746580 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.746584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.746588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.746597 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.746612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.746617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.746621 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.746626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-30 01:07:52.746630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-30 01:07:52.746639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.746644 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.746650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.746654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.746659 | orchestrator | 
skipping: [testbed-node-1] 2026-03-30 01:07:52.746663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-30 01:07:52.746667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-30 01:07:52.746671 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746676 | orchestrator | 2026-03-30 01:07:52.746680 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-30 01:07:52.746684 | orchestrator | Monday 30 March 2026 01:05:09 +0000 (0:00:01.784) 0:06:04.108 ********** 2026-03-30 01:07:52.746691 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-30 01:07:52.746696 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-30 01:07:52.746700 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.746704 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  
2026-03-30 01:07:52.746708 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-30 01:07:52.746712 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.746716 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-30 01:07:52.746721 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-30 01:07:52.746725 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.746729 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-30 01:07:52.746733 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-30 01:07:52.746737 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746741 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-30 01:07:52.746745 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-30 01:07:52.746750 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746754 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-30 01:07:52.746758 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-30 01:07:52.746762 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746766 | orchestrator | 2026-03-30 01:07:52.746770 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-30 01:07:52.746775 | orchestrator | Monday 30 March 2026 01:05:10 +0000 (0:00:00.908) 0:06:05.016 ********** 2026-03-30 01:07:52.746784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-30 01:07:52.746868 | orchestrator | 2026-03-30 01:07:52.746873 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-30 01:07:52.746877 | orchestrator | Monday 30 March 2026 01:05:13 +0000 (0:00:03.112) 0:06:08.129 ********** 2026-03-30 01:07:52.746881 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.746885 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.746889 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.746894 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.746898 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.746902 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.746906 | orchestrator | 2026-03-30 01:07:52.746911 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-30 01:07:52.746915 | orchestrator | Monday 30 March 2026 01:05:14 +0000 (0:00:00.823) 0:06:08.952 ********** 2026-03-30 01:07:52.746919 | orchestrator | 2026-03-30 01:07:52.746923 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-30 01:07:52.746927 | orchestrator | Monday 30 March 2026 01:05:14 +0000 (0:00:00.142) 0:06:09.095 ********** 2026-03-30 01:07:52.746931 | orchestrator | 2026-03-30 01:07:52.746936 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-30 01:07:52.746940 | orchestrator | Monday 30 March 2026 01:05:14 +0000 (0:00:00.138) 0:06:09.234 ********** 2026-03-30 01:07:52.746944 | orchestrator | 2026-03-30 01:07:52.746948 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-30 01:07:52.746952 | orchestrator | Monday 30 March 2026 01:05:14 +0000 (0:00:00.163) 0:06:09.397 ********** 2026-03-30 01:07:52.746956 | orchestrator | 2026-03-30 01:07:52.746960 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-30 01:07:52.746964 | orchestrator | Monday 30 March 2026 01:05:14 +0000 (0:00:00.143) 0:06:09.541 ********** 2026-03-30 01:07:52.746968 | orchestrator | 2026-03-30 01:07:52.746972 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-30 01:07:52.746977 | orchestrator | Monday 30 March 2026 01:05:15 +0000 (0:00:00.312) 0:06:09.854 ********** 2026-03-30 01:07:52.746981 | orchestrator | 2026-03-30 01:07:52.746985 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-30 01:07:52.746989 | orchestrator | Monday 30 March 2026 01:05:15 +0000 (0:00:00.147) 0:06:10.001 ********** 2026-03-30 01:07:52.746993 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.746997 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.747001 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.747006 | orchestrator | 2026-03-30 01:07:52.747010 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-30 01:07:52.747014 | orchestrator | Monday 30 March 2026 01:05:22 +0000 (0:00:06.906) 0:06:16.907 ********** 2026-03-30 01:07:52.747018 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.747022 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.747026 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.747030 | orchestrator | 2026-03-30 01:07:52.747034 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-30 01:07:52.747038 | orchestrator | Monday 30 March 2026 01:05:33 +0000 (0:00:11.341) 
0:06:28.249 ********** 2026-03-30 01:07:52.747042 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.747047 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.747051 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.747055 | orchestrator | 2026-03-30 01:07:52.747061 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-30 01:07:52.747066 | orchestrator | Monday 30 March 2026 01:05:50 +0000 (0:00:16.842) 0:06:45.092 ********** 2026-03-30 01:07:52.747070 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.747074 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.747078 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.747084 | orchestrator | 2026-03-30 01:07:52.747089 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-30 01:07:52.747093 | orchestrator | Monday 30 March 2026 01:06:18 +0000 (0:00:28.156) 0:07:13.248 ********** 2026-03-30 01:07:52.747097 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-30 01:07:52.747101 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-30 01:07:52.747107 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
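The container definitions earlier in this play carry healthchecks such as `['CMD-SHELL', 'healthcheck_port nova-compute 5672']` with `interval: 30`, `retries: 3`. As an illustration only: kolla's `healthcheck_port` verifies the named service's connectivity to a port (here nova-compute's AMQP connection to 5672); a rough stand-in that merely tests TCP reachability of a host:port could look like this — the function name and the reachability-only semantics are assumptions, not kolla's actual script.

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Rough stand-in for a `healthcheck_port`-style probe.

    Assumption: the real kolla script inspects the service process's
    open sockets; here we only attempt a TCP connection, which is
    close enough for illustration.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused / timed out -> unhealthy
        return False
```

A container runtime would run such a probe every `interval` seconds and mark the container unhealthy after `retries` consecutive failures.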
2026-03-30 01:07:52.747111 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.747116 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.747120 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.747124 | orchestrator | 2026-03-30 01:07:52.747128 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-30 01:07:52.747132 | orchestrator | Monday 30 March 2026 01:06:24 +0000 (0:00:06.154) 0:07:19.403 ********** 2026-03-30 01:07:52.747136 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.747140 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.747144 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.747148 | orchestrator | 2026-03-30 01:07:52.747153 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-30 01:07:52.747157 | orchestrator | Monday 30 March 2026 01:06:25 +0000 (0:00:00.791) 0:07:20.195 ********** 2026-03-30 01:07:52.747161 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:07:52.747165 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:07:52.747169 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:07:52.747173 | orchestrator | 2026-03-30 01:07:52.747177 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-30 01:07:52.747182 | orchestrator | Monday 30 March 2026 01:06:44 +0000 (0:00:18.800) 0:07:38.995 ********** 2026-03-30 01:07:52.747186 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.747190 | orchestrator | 2026-03-30 01:07:52.747194 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-30 01:07:52.747198 | orchestrator | Monday 30 March 2026 01:06:44 +0000 (0:00:00.217) 0:07:39.213 ********** 2026-03-30 01:07:52.747202 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.747207 | orchestrator | skipping: [testbed-node-4] 
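The "Checking libvirt container is ready" handler above failed once per node and then succeeded with "(10 retries left)" messages — the standard Ansible `retries`/`until` pattern. A hedged sketch of such a task (the `command` body and `delay` are illustrative guesses, not kolla-ansible's exact task):

```yaml
# Sketch of a retry-until-ready handler; module arguments are assumptions.
- name: Checking libvirt container is ready
  command: docker exec nova_libvirt virsh version
  register: result
  until: result.rc == 0
  retries: 10
  delay: 5
```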
2026-03-30 01:07:52.747211 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747215 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747219 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747223 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-30 01:07:52.747228 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-30 01:07:52.747232 | orchestrator | 2026-03-30 01:07:52.747236 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-30 01:07:52.747240 | orchestrator | Monday 30 March 2026 01:07:05 +0000 (0:00:20.943) 0:08:00.156 ********** 2026-03-30 01:07:52.747244 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747248 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747252 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.747256 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747260 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.747264 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.747268 | orchestrator | 2026-03-30 01:07:52.747273 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-30 01:07:52.747277 | orchestrator | Monday 30 March 2026 01:07:13 +0000 (0:00:08.010) 0:08:08.167 ********** 2026-03-30 01:07:52.747281 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.747285 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.747289 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747293 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747300 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747304 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-30 01:07:52.747308 | 
orchestrator | 2026-03-30 01:07:52.747313 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-30 01:07:52.747317 | orchestrator | Monday 30 March 2026 01:07:17 +0000 (0:00:03.579) 0:08:11.746 ********** 2026-03-30 01:07:52.747321 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-30 01:07:52.747325 | orchestrator | 2026-03-30 01:07:52.747329 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-30 01:07:52.747333 | orchestrator | Monday 30 March 2026 01:07:30 +0000 (0:00:13.193) 0:08:24.940 ********** 2026-03-30 01:07:52.747337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-30 01:07:52.747341 | orchestrator | 2026-03-30 01:07:52.747345 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-30 01:07:52.747349 | orchestrator | Monday 30 March 2026 01:07:31 +0000 (0:00:01.238) 0:08:26.178 ********** 2026-03-30 01:07:52.747354 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.747358 | orchestrator | 2026-03-30 01:07:52.747362 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-30 01:07:52.747366 | orchestrator | Monday 30 March 2026 01:07:32 +0000 (0:00:01.285) 0:08:27.464 ********** 2026-03-30 01:07:52.747370 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-30 01:07:52.747374 | orchestrator | 2026-03-30 01:07:52.747378 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-30 01:07:52.747382 | orchestrator | Monday 30 March 2026 01:07:45 +0000 (0:00:12.686) 0:08:40.151 ********** 2026-03-30 01:07:52.747387 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:07:52.747391 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:07:52.747395 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:07:52.747401 | 
orchestrator | ok: [testbed-node-1] 2026-03-30 01:07:52.747405 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:07:52.747409 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:07:52.747413 | orchestrator | 2026-03-30 01:07:52.747418 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-30 01:07:52.747433 | orchestrator | 2026-03-30 01:07:52.747440 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-30 01:07:52.747446 | orchestrator | Monday 30 March 2026 01:07:47 +0000 (0:00:01.743) 0:08:41.895 ********** 2026-03-30 01:07:52.747454 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:07:52.747460 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:07:52.747466 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:07:52.747477 | orchestrator | 2026-03-30 01:07:52.747484 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-30 01:07:52.747491 | orchestrator | 2026-03-30 01:07:52.747497 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-30 01:07:52.747508 | orchestrator | Monday 30 March 2026 01:07:48 +0000 (0:00:00.987) 0:08:42.882 ********** 2026-03-30 01:07:52.747515 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747522 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747529 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747535 | orchestrator | 2026-03-30 01:07:52.747543 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-30 01:07:52.747549 | orchestrator | 2026-03-30 01:07:52.747556 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-30 01:07:52.747563 | orchestrator | Monday 30 March 2026 01:07:48 +0000 (0:00:00.440) 0:08:43.323 ********** 2026-03-30 01:07:52.747570 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-30 01:07:52.747577 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-30 01:07:52.747583 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-30 01:07:52.747591 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-30 01:07:52.747607 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-30 01:07:52.747614 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-30 01:07:52.747622 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:07:52.747628 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-30 01:07:52.747633 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-30 01:07:52.747640 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-30 01:07:52.747646 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-30 01:07:52.747653 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-30 01:07:52.747660 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-30 01:07:52.747667 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-30 01:07:52.747675 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-30 01:07:52.747682 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-30 01:07:52.747689 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-30 01:07:52.747696 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-30 01:07:52.747700 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:07:52.747704 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-30 01:07:52.747708 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-30 
01:07:52.747713 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-30 01:07:52.747717 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-30 01:07:52.747721 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-30 01:07:52.747725 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-30 01:07:52.747729 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-30 01:07:52.747733 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:07:52.747737 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-30 01:07:52.747742 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-30 01:07:52.747746 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-30 01:07:52.747750 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-30 01:07:52.747764 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747777 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-30 01:07:52.747784 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-30 01:07:52.747792 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747799 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-30 01:07:52.747806 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-30 01:07:52.747811 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-30 01:07:52.747815 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-30 01:07:52.747819 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-30 01:07:52.747823 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-30 01:07:52.747827 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747831 | orchestrator 
| 2026-03-30 01:07:52.747835 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-30 01:07:52.747840 | orchestrator | 2026-03-30 01:07:52.747844 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-30 01:07:52.747848 | orchestrator | Monday 30 March 2026 01:07:49 +0000 (0:00:01.109) 0:08:44.433 ********** 2026-03-30 01:07:52.747852 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-30 01:07:52.747856 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-30 01:07:52.747861 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747874 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-30 01:07:52.747878 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-30 01:07:52.747882 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747886 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-30 01:07:52.747891 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-30 01:07:52.747895 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747899 | orchestrator | 2026-03-30 01:07:52.747903 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-30 01:07:52.747908 | orchestrator | 2026-03-30 01:07:52.747914 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-30 01:07:52.747921 | orchestrator | Monday 30 March 2026 01:07:50 +0000 (0:00:00.629) 0:08:45.063 ********** 2026-03-30 01:07:52.747927 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747935 | orchestrator | 2026-03-30 01:07:52.747942 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-30 01:07:52.747949 | orchestrator | 2026-03-30 01:07:52.747958 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2026-03-30 01:07:52.747963 | orchestrator | Monday 30 March 2026 01:07:50 +0000 (0:00:00.600) 0:08:45.663 ********** 2026-03-30 01:07:52.747967 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:07:52.747972 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:07:52.747976 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:07:52.747980 | orchestrator | 2026-03-30 01:07:52.747984 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:07:52.747989 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 01:07:52.747994 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-03-30 01:07:52.747999 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-30 01:07:52.748003 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-30 01:07:52.748007 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-30 01:07:52.748014 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-30 01:07:52.748024 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-30 01:07:52.748032 | orchestrator | 2026-03-30 01:07:52.748039 | orchestrator | 2026-03-30 01:07:52.748045 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:07:52.748052 | orchestrator | Monday 30 March 2026 01:07:51 +0000 (0:00:00.463) 0:08:46.127 ********** 2026-03-30 01:07:52.748059 | orchestrator | =============================================================================== 2026-03-30 01:07:52.748065 | orchestrator | nova : 
Running Nova API bootstrap container ---------------------------- 35.01s 2026-03-30 01:07:52.748072 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 28.16s 2026-03-30 01:07:52.748078 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.03s 2026-03-30 01:07:52.748085 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.29s 2026-03-30 01:07:52.748093 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.94s 2026-03-30 01:07:52.748100 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.73s 2026-03-30 01:07:52.748113 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 18.80s 2026-03-30 01:07:52.748117 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.84s 2026-03-30 01:07:52.748121 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.33s 2026-03-30 01:07:52.748125 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.31s 2026-03-30 01:07:52.748129 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.55s 2026-03-30 01:07:52.748133 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.19s 2026-03-30 01:07:52.748137 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.96s 2026-03-30 01:07:52.748141 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.69s 2026-03-30 01:07:52.748145 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.34s 2026-03-30 01:07:52.748149 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.75s 2026-03-30 01:07:52.748154 | orchestrator | nova : Copying over 
nova.conf ------------------------------------------- 8.64s 2026-03-30 01:07:52.748158 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.01s 2026-03-30 01:07:52.748162 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.88s 2026-03-30 01:07:52.748166 | orchestrator | nova : Restart nova-api container --------------------------------------- 7.13s 2026-03-30 01:07:52.748170 | orchestrator | 2026-03-30 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:07:55.789160 | orchestrator | 2026-03-30 01:07:55 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:07:55.789240 | orchestrator | 2026-03-30 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:07:58.833088 | orchestrator | 2026-03-30 01:07:58 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:07:58.833174 | orchestrator | 2026-03-30 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:01.867502 | orchestrator | 2026-03-30 01:08:01 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:01.867551 | orchestrator | 2026-03-30 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:04.908545 | orchestrator | 2026-03-30 01:08:04 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:04.908602 | orchestrator | 2026-03-30 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:07.954399 | orchestrator | 2026-03-30 01:08:07 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:07.954446 | orchestrator | 2026-03-30 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:10.993141 | orchestrator | 2026-03-30 01:08:10 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:10.993195 | orchestrator | 2026-03-30 01:08:10 | INFO  | Wait 
1 second(s) until the next check 2026-03-30 01:08:14.033454 | orchestrator | 2026-03-30 01:08:14 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:14.033516 | orchestrator | 2026-03-30 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:17.069845 | orchestrator | 2026-03-30 01:08:17 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:17.069916 | orchestrator | 2026-03-30 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:20.112774 | orchestrator | 2026-03-30 01:08:20 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:20.112830 | orchestrator | 2026-03-30 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:23.152046 | orchestrator | 2026-03-30 01:08:23 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:23.152096 | orchestrator | 2026-03-30 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:26.196506 | orchestrator | 2026-03-30 01:08:26 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:26.196548 | orchestrator | 2026-03-30 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:29.231408 | orchestrator | 2026-03-30 01:08:29 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:29.231459 | orchestrator | 2026-03-30 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:32.278094 | orchestrator | 2026-03-30 01:08:32 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:32.278141 | orchestrator | 2026-03-30 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:35.323610 | orchestrator | 2026-03-30 01:08:35 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:35.323699 | orchestrator | 2026-03-30 01:08:35 | INFO  | Wait 1 second(s) 
until the next check 2026-03-30 01:08:38.361352 | orchestrator | 2026-03-30 01:08:38 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:38.361397 | orchestrator | 2026-03-30 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:41.398414 | orchestrator | 2026-03-30 01:08:41 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:41.398465 | orchestrator | 2026-03-30 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:44.440913 | orchestrator | 2026-03-30 01:08:44 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:44.440964 | orchestrator | 2026-03-30 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:47.486534 | orchestrator | 2026-03-30 01:08:47 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:47.486604 | orchestrator | 2026-03-30 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:50.528855 | orchestrator | 2026-03-30 01:08:50 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:50.528900 | orchestrator | 2026-03-30 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:53.574557 | orchestrator | 2026-03-30 01:08:53 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:53.574601 | orchestrator | 2026-03-30 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:56.615837 | orchestrator | 2026-03-30 01:08:56 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:56.615887 | orchestrator | 2026-03-30 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:08:59.654928 | orchestrator | 2026-03-30 01:08:59 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:08:59.655004 | orchestrator | 2026-03-30 01:08:59 | INFO  | Wait 1 second(s) until the next 
check 2026-03-30 01:09:02.708092 | orchestrator | 2026-03-30 01:09:02 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:02.708141 | orchestrator | 2026-03-30 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:05.754086 | orchestrator | 2026-03-30 01:09:05 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:05.754287 | orchestrator | 2026-03-30 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:08.802684 | orchestrator | 2026-03-30 01:09:08 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:08.802736 | orchestrator | 2026-03-30 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:11.850752 | orchestrator | 2026-03-30 01:09:11 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:11.850814 | orchestrator | 2026-03-30 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:14.885095 | orchestrator | 2026-03-30 01:09:14 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:14.885173 | orchestrator | 2026-03-30 01:09:14 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:17.925028 | orchestrator | 2026-03-30 01:09:17 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:17.925076 | orchestrator | 2026-03-30 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:20.967458 | orchestrator | 2026-03-30 01:09:20 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:20.967749 | orchestrator | 2026-03-30 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:24.015058 | orchestrator | 2026-03-30 01:09:24 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:24.015107 | orchestrator | 2026-03-30 01:09:24 | INFO  | Wait 1 second(s) until the next check 
2026-03-30 01:09:27.058876 | orchestrator | 2026-03-30 01:09:27 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:27.058918 | orchestrator | 2026-03-30 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:30.108482 | orchestrator | 2026-03-30 01:09:30 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:30.108524 | orchestrator | 2026-03-30 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:33.153159 | orchestrator | 2026-03-30 01:09:33 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:33.153202 | orchestrator | 2026-03-30 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:36.207407 | orchestrator | 2026-03-30 01:09:36 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:36.207464 | orchestrator | 2026-03-30 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:39.255635 | orchestrator | 2026-03-30 01:09:39 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:39.255792 | orchestrator | 2026-03-30 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:42.302592 | orchestrator | 2026-03-30 01:09:42 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:42.302647 | orchestrator | 2026-03-30 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:45.345725 | orchestrator | 2026-03-30 01:09:45 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:45.345771 | orchestrator | 2026-03-30 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:48.391126 | orchestrator | 2026-03-30 01:09:48 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:48.391206 | orchestrator | 2026-03-30 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-03-30 
01:09:51.429737 | orchestrator | 2026-03-30 01:09:51 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:51.429792 | orchestrator | 2026-03-30 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:54.476548 | orchestrator | 2026-03-30 01:09:54 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:54.476618 | orchestrator | 2026-03-30 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:09:57.515540 | orchestrator | 2026-03-30 01:09:57 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:09:57.515660 | orchestrator | 2026-03-30 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:10:00.567681 | orchestrator | 2026-03-30 01:10:00 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:10:00.567786 | orchestrator | 2026-03-30 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:10:03.612310 | orchestrator | 2026-03-30 01:10:03 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:10:03.612391 | orchestrator | 2026-03-30 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:10:06.655365 | orchestrator | 2026-03-30 01:10:06 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:10:06.655447 | orchestrator | 2026-03-30 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:10:09.698549 | orchestrator | 2026-03-30 01:10:09 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:10:09.698588 | orchestrator | 2026-03-30 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:10:12.745971 | orchestrator | 2026-03-30 01:10:12 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state STARTED 2026-03-30 01:10:12.746394 | orchestrator | 2026-03-30 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-03-30 01:10:15.785588 
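The repeated status checks above follow a simple poll-until-terminal-state pattern: query the task, log the state, sleep a fixed interval, repeat until the state is terminal. A minimal sketch of that loop (function and constant names here are illustrative, not the actual osism client code):

```python
import time

# Terminal states after which polling stops (assumed set, for illustration).
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_task(check_state, task_id, interval=1.0, sleep=time.sleep):
    """Poll check_state(task_id) until it returns a terminal state.

    check_state: callable returning the current state string.
    sleep is injectable so tests can skip the real delay.
    """
    while True:
        state = check_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL:
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        sleep(interval)
```

Note the wall-clock gap between checks in the log is ~3 s despite the "Wait 1 second(s)" message; the extra time is presumably spent in the status query itself.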
2026-03-30 01:10:55.355412 | orchestrator | 2026-03-30 01:10:55 | INFO  | Task c99825dc-fbc9-4dae-b813-83ca92fcbf8b is in state SUCCESS
2026-03-30 01:10:55.356903 | orchestrator |
2026-03-30 01:10:55.356940 | orchestrator |
2026-03-30 01:10:55.356946 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-30 01:10:55.356951 | orchestrator |
2026-03-30 01:10:55.356963 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-30 01:10:55.356967 | orchestrator | Monday 30 March 2026 01:06:15 +0000 (0:00:00.325) 0:00:00.325 **********
2026-03-30 01:10:55.356971 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.356976 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:10:55.356980 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:10:55.356984 | orchestrator |
2026-03-30 01:10:55.356988 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-30 01:10:55.356992 | orchestrator | Monday 30 March 2026 01:06:16 +0000 (0:00:00.312) 0:00:00.637 **********
2026-03-30 01:10:55.356995 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-30 01:10:55.356999 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-30 01:10:55.357003 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-30 01:10:55.357007 | orchestrator |
2026-03-30 01:10:55.357011 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-30 01:10:55.357015 | orchestrator |
2026-03-30 01:10:55.357019 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-30 01:10:55.357022 | orchestrator | Monday 30 March 2026 01:06:16 +0000 (0:00:00.294) 0:00:00.932 **********
2026-03-30 01:10:55.357026 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:10:55.357031 | orchestrator |
2026-03-30 01:10:55.357034 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-30 01:10:55.357038 | orchestrator | Monday 30 March 2026 01:06:17 +0000 (0:00:00.679) 0:00:01.612 **********
2026-03-30 01:10:55.357042 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-30 01:10:55.357046 | orchestrator |
2026-03-30 01:10:55.357050 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-30 01:10:55.357054 | orchestrator | Monday 30 March 2026 01:06:20 +0000 (0:00:03.796) 0:00:05.408 **********
2026-03-30 01:10:55.357058 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-30 01:10:55.357062 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-30 01:10:55.357076 | orchestrator |
2026-03-30 01:10:55.357080 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-30 01:10:55.357084 | orchestrator | Monday 30 March 2026 01:06:26 +0000 (0:00:06.016) 0:00:11.424 **********
2026-03-30 01:10:55.357088 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-30 01:10:55.357091 | orchestrator |
2026-03-30 01:10:55.357095 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-30 01:10:55.357099 | orchestrator | Monday 30 March 2026 01:06:29 +0000 (0:00:02.792) 0:00:14.217 **********
2026-03-30 01:10:55.357103 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-30 01:10:55.357106 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-30 01:10:55.357110 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-30 01:10:55.357147 | orchestrator |
2026-03-30 01:10:55.357153 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-30 01:10:55.357160 | orchestrator | Monday 30 March 2026 01:06:36 +0000 (0:00:06.859) 0:00:21.076 **********
2026-03-30 01:10:55.357166 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-30 01:10:55.357172 | orchestrator |
2026-03-30 01:10:55.357178 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-30 01:10:55.357185 | orchestrator | Monday 30 March 2026 01:06:39 +0000 (0:00:03.041) 0:00:24.117 **********
2026-03-30 01:10:55.357190 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-30 01:10:55.357194 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-30 01:10:55.357197 | orchestrator |
2026-03-30 01:10:55.357201 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-30 01:10:55.357205 | orchestrator | Monday 30 March 2026 01:06:47 +0000 (0:00:07.482) 0:00:31.600 **********
2026-03-30 01:10:55.357209 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-30 01:10:55.357212 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-30 01:10:55.357216 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-30 01:10:55.357220 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-30 01:10:55.357223 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-30 01:10:55.357227 | orchestrator |
2026-03-30 01:10:55.357231 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-30 01:10:55.357234 | orchestrator | Monday 30 March 2026 01:07:02 +0000 (0:00:15.183) 0:00:46.784 **********
2026-03-30 01:10:55.357238 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:10:55.357242 | orchestrator |
2026-03-30 01:10:55.357257 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-30 01:10:55.357261 | orchestrator | Monday 30 March 2026 01:07:03 +0000 (0:00:00.705) 0:00:47.489 **********
2026-03-30 01:10:55.357265 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357269 | orchestrator |
2026-03-30 01:10:55.357273 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-30 01:10:55.357276 | orchestrator | Monday 30 March 2026 01:07:07 +0000 (0:00:04.260) 0:00:51.750 **********
2026-03-30 01:10:55.357280 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357284 | orchestrator |
2026-03-30 01:10:55.357288 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-30 01:10:55.357300 | orchestrator | Monday 30 March 2026 01:07:11 +0000 (0:00:03.956) 0:00:55.709 **********
2026-03-30 01:10:55.357304 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357308 | orchestrator |
2026-03-30 01:10:55.357315 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-30 01:10:55.357319 | orchestrator | Monday 30 March 2026 01:07:14 +0000 (0:00:03.153) 0:00:58.863 **********
2026-03-30 01:10:55.357323 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-30 01:10:55.357326 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-30 01:10:55.357335 | orchestrator |
2026-03-30 01:10:55.357339 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-30 01:10:55.357352 | orchestrator | Monday 30 March 2026 01:07:23 +0000 (0:00:08.746) 0:01:07.609 **********
2026-03-30 01:10:55.357361 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-30 01:10:55.357365 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-30 01:10:55.357370 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-30 01:10:55.357374 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-30 01:10:55.357378 | orchestrator |
2026-03-30 01:10:55.357403 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-30 01:10:55.357407 | orchestrator | Monday 30 March 2026 01:07:38 +0000 (0:00:15.543) 0:01:23.153 **********
2026-03-30 01:10:55.357411 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357415 | orchestrator |
2026-03-30 01:10:55.357419 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-30 01:10:55.357422 | orchestrator | Monday 30 March 2026 01:07:43 +0000 (0:00:04.907) 0:01:28.061 **********
2026-03-30 01:10:55.357426 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357430 | orchestrator |
2026-03-30 01:10:55.357434 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-30 01:10:55.357438 | orchestrator | Monday 30 March 2026 01:07:49 +0000 (0:00:05.700) 0:01:33.761 **********
2026-03-30 01:10:55.357442 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:10:55.357452 | orchestrator |
2026-03-30 01:10:55.357456 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-30 01:10:55.357460 | orchestrator | Monday 30 March 2026 01:07:49 +0000 (0:00:00.416) 0:01:34.178 **********
2026-03-30 01:10:55.357463 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357467 | orchestrator |
2026-03-30 01:10:55.357475 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-30 01:10:55.357479 | orchestrator | Monday 30 March 2026 01:07:53 +0000 (0:00:03.380) 0:01:37.558 **********
2026-03-30 01:10:55.357483 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:10:55.357487 | orchestrator |
2026-03-30 01:10:55.357491 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-30 01:10:55.357504 | orchestrator | Monday 30 March 2026 01:07:53 +0000 (0:00:00.743) 0:01:38.302 **********
2026-03-30 01:10:55.357508 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357513 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357517 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357521 | orchestrator |
2026-03-30 01:10:55.357526 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-03-30 01:10:55.357530 | orchestrator | Monday 30 March 2026 01:08:00 +0000 (0:00:06.844) 0:01:45.146 **********
2026-03-30 01:10:55.357534 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357539 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357543 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357547 | orchestrator |
2026-03-30 01:10:55.357552 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-30 01:10:55.357556 | orchestrator | Monday 30 March 2026 01:08:04 +0000 (0:00:04.280) 0:01:49.426 **********
2026-03-30 01:10:55.357560 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357565 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357569 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357573 | orchestrator |
2026-03-30 01:10:55.357578 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-30 01:10:55.357585 | orchestrator | Monday 30 March 2026 01:08:05 +0000 (0:00:00.694) 0:01:50.121 **********
2026-03-30 01:10:55.357605 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:10:55.357610 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357614 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:10:55.357618 | orchestrator |
2026-03-30 01:10:55.357623 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-30 01:10:55.357627 | orchestrator | Monday 30 March 2026 01:08:07 +0000 (0:00:01.476) 0:01:51.598 **********
2026-03-30 01:10:55.357632 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357636 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357640 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357645 | orchestrator |
2026-03-30 01:10:55.357649 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-30 01:10:55.357653 | orchestrator | Monday 30 March 2026 01:08:08 +0000 (0:00:01.134) 0:01:52.732 **********
2026-03-30 01:10:55.357658 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357662 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357666 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357671 | orchestrator |
2026-03-30 01:10:55.357675 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-30 01:10:55.357679 | orchestrator | Monday 30 March 2026 01:08:09 +0000 (0:00:02.006) 0:01:53.806 **********
2026-03-30 01:10:55.357682 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357686 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357690 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357694 | orchestrator |
2026-03-30 01:10:55.357701 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-30 01:10:55.357707 | orchestrator | Monday 30 March 2026 01:08:11 +0000 (0:00:02.006) 0:01:55.812 **********
2026-03-30 01:10:55.357711 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:10:55.357715 | orchestrator | changed: [testbed-node-2]
2026-03-30 01:10:55.357719 | orchestrator | changed: [testbed-node-1]
2026-03-30 01:10:55.357723 | orchestrator |
2026-03-30 01:10:55.357727 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-30 01:10:55.357731 | orchestrator | Monday 30 March 2026 01:08:12 +0000 (0:00:01.484) 0:01:57.297 **********
2026-03-30 01:10:55.357737 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357743 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:10:55.357752 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:10:55.357758 | orchestrator |
2026-03-30 01:10:55.357765 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-30 01:10:55.357771 | orchestrator | Monday 30 March 2026 01:08:13 +0000 (0:00:00.541) 0:01:57.839 **********
2026-03-30 01:10:55.357777 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357783 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:10:55.357789 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:10:55.357796 | orchestrator |
2026-03-30 01:10:55.357802 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-30 01:10:55.357809 | orchestrator | Monday 30 March 2026 01:08:16 +0000 (0:00:02.850) 0:02:00.690 **********
2026-03-30 01:10:55.357815 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-30 01:10:55.357821 | orchestrator |
2026-03-30 01:10:55.357827 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-30 01:10:55.357834 | orchestrator | Monday 30 March 2026 01:08:16 +0000 (0:00:00.709) 0:02:01.399 **********
2026-03-30 01:10:55.357839 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357843 | orchestrator |
2026-03-30 01:10:55.357846 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-30 01:10:55.357850 | orchestrator | Monday 30 March 2026 01:08:21 +0000 (0:00:04.413) 0:02:05.813 **********
2026-03-30 01:10:55.357854 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:10:55.357857 | orchestrator |
2026-03-30 01:10:55.357861 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-30 01:10:55.357869 | orchestrator | Monday 30 March 2026 01:08:24 +0000 (0:00:02.964)
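The paired time columns in the task headers above, e.g. "(0:00:03.796) 0:00:05.408", come from Ansible's profile_tasks callback: the parenthesised value is the previous task's duration and the second value is the cumulative play time. A small sketch of that accumulation (illustrative only, not the callback's actual implementation):

```python
def fmt(seconds):
    """Format seconds as H:MM:SS.mmm, matching the profile_tasks columns."""
    m, s = divmod(seconds, 60)
    h, m = divmod(int(m), 60)
    return f"{h}:{int(m):02d}:{s:06.3f}"

def accumulate(durations):
    """Return '(duration) cumulative' rows for a list of task durations."""
    total = 0.0
    rows = []
    for d in durations:
        total += d
        rows.append(f"({fmt(d)}) {fmt(total)}")
    return rows
```

For the first two tasks in the play (0.325 s and 0.312 s), this reproduces the log's "(0:00:00.325) 0:00:00.325" and "(0:00:00.312) 0:00:00.637".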
0:02:08.777 ********** 2026-03-30 01:10:55.357873 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-30 01:10:55.357877 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-30 01:10:55.357881 | orchestrator | 2026-03-30 01:10:55.357885 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-30 01:10:55.357888 | orchestrator | Monday 30 March 2026 01:08:30 +0000 (0:00:06.579) 0:02:15.357 ********** 2026-03-30 01:10:55.357892 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:10:55.357896 | orchestrator | 2026-03-30 01:10:55.357900 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-30 01:10:55.357904 | orchestrator | Monday 30 March 2026 01:08:34 +0000 (0:00:03.208) 0:02:18.565 ********** 2026-03-30 01:10:55.357907 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:10:55.357911 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:10:55.357915 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:10:55.357918 | orchestrator | 2026-03-30 01:10:55.357922 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-30 01:10:55.357926 | orchestrator | Monday 30 March 2026 01:08:34 +0000 (0:00:00.291) 0:02:18.857 ********** 2026-03-30 01:10:55.357932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.357941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.357947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.357955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.357960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.357964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.357968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.357973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.357982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.357987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.357994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.357998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358042 | orchestrator | 2026-03-30 01:10:55.358046 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-30 01:10:55.358052 | orchestrator | Monday 30 March 2026 01:08:36 +0000 (0:00:02.461) 0:02:21.318 ********** 2026-03-30 01:10:55.358056 | orchestrator | skipping: [testbed-node-0] 2026-03-30 
01:10:55.358060 | orchestrator | 2026-03-30 01:10:55.358067 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-30 01:10:55.358073 | orchestrator | Monday 30 March 2026 01:08:36 +0000 (0:00:00.131) 0:02:21.450 ********** 2026-03-30 01:10:55.358077 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:10:55.358081 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:10:55.358085 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:10:55.358097 | orchestrator | 2026-03-30 01:10:55.358100 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-30 01:10:55.358104 | orchestrator | Monday 30 March 2026 01:08:37 +0000 (0:00:00.341) 0:02:21.791 ********** 2026-03-30 01:10:55.358108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358143 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:10:55.358153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358176 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:10:55.358180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 
01:10:55.358201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358209 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:10:55.358213 | orchestrator | 2026-03-30 01:10:55.358217 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-30 01:10:55.358220 | orchestrator | Monday 30 March 2026 01:08:37 +0000 (0:00:00.674) 0:02:22.466 ********** 2026-03-30 01:10:55.358245 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:10:55.358250 | orchestrator | 2026-03-30 01:10:55.358254 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-30 01:10:55.358258 | orchestrator | Monday 30 March 2026 01:08:38 
+0000 (0:00:00.715) 0:02:23.181 ********** 2026-03-30 01:10:55.358262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 
01:10:55.358279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-30 01:10:55.358291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358344 | orchestrator | 2026-03-30 01:10:55.358348 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-30 01:10:55.358354 | orchestrator | Monday 30 March 2026 01:08:43 +0000 (0:00:04.785) 0:02:27.966 ********** 2026-03-30 01:10:55.358358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358381 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:10:55.358389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358409 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:10:55.358416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358440 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:10:55.358444 | orchestrator | 2026-03-30 01:10:55.358448 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-30 01:10:55.358452 | orchestrator | Monday 30 March 2026 01:08:44 +0000 (0:00:00.623) 0:02:28.590 ********** 2026-03-30 01:10:55.358456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358484 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:10:55.358488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-03-30 01:10:55.358492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 01:10:55.358516 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:10:55.358520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-30 01:10:55.358524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-30 01:10:55.358528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-30 01:10:55.358539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-30 
01:10:55.358542 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:10:55.358546 | orchestrator | 2026-03-30 01:10:55.358550 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-30 01:10:55.358554 | orchestrator | Monday 30 March 2026 01:08:45 +0000 (0:00:01.116) 0:02:29.706 ********** 2026-03-30 01:10:55.358564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 
01:10:55.358792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358795 | orchestrator | 2026-03-30 01:10:55.358802 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-30 01:10:55.358806 | orchestrator | Monday 30 March 2026 01:08:50 +0000 (0:00:04.835) 0:02:34.541 ********** 2026-03-30 01:10:55.358810 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-30 01:10:55.358814 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-30 01:10:55.358818 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-30 01:10:55.358822 | orchestrator | 2026-03-30 01:10:55.358825 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-30 01:10:55.358829 | orchestrator | Monday 30 March 2026 01:08:51 +0000 (0:00:01.546) 0:02:36.088 ********** 2026-03-30 01:10:55.358833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.358851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.358866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.358913 | orchestrator | 2026-03-30 01:10:55.358919 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2026-03-30 01:10:55.358925 | orchestrator | Monday 30 March 2026 01:09:06 +0000 (0:00:15.196) 0:02:51.284 ********** 2026-03-30 01:10:55.358931 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.358937 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:10:55.358943 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:10:55.358950 | orchestrator | 2026-03-30 01:10:55.358955 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-30 01:10:55.358961 | orchestrator | Monday 30 March 2026 01:09:08 +0000 (0:00:01.828) 0:02:53.113 ********** 2026-03-30 01:10:55.358966 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.358974 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.358983 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.358990 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.358996 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359003 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359015 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359022 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359028 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359035 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359038 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359042 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359046 | orchestrator | 2026-03-30 01:10:55.359067 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2026-03-30 01:10:55.359071 | orchestrator | Monday 30 March 2026 01:09:13 +0000 (0:00:04.682) 0:02:57.795 ********** 2026-03-30 01:10:55.359075 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.359079 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.359083 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.359086 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359090 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359094 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359097 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359101 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359105 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359108 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359145 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359151 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359155 | orchestrator | 2026-03-30 01:10:55.359159 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-30 01:10:55.359163 | orchestrator | Monday 30 March 2026 01:09:18 +0000 (0:00:04.935) 0:03:02.730 ********** 2026-03-30 01:10:55.359166 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.359170 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.359174 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-30 01:10:55.359178 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359181 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359185 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-30 01:10:55.359189 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359193 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359197 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-30 01:10:55.359200 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359204 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359208 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-30 01:10:55.359212 | orchestrator | 2026-03-30 01:10:55.359215 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-30 01:10:55.359219 | orchestrator | Monday 30 March 2026 01:09:22 +0000 (0:00:04.507) 0:03:07.237 ********** 2026-03-30 01:10:55.359224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.359237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.359242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-30 01:10:55.359246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.359250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.359255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-30 01:10:55.359259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-30 01:10:55.359332 | orchestrator | 2026-03-30 01:10:55.359337 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-30 01:10:55.359343 | orchestrator | Monday 30 March 2026 01:09:26 +0000 (0:00:03.630) 0:03:10.868 ********** 2026-03-30 01:10:55.359348 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:10:55.359354 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:10:55.359360 | orchestrator | skipping: [testbed-node-2] 
2026-03-30 01:10:55.359365 | orchestrator | 2026-03-30 01:10:55.359372 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-30 01:10:55.359379 | orchestrator | Monday 30 March 2026 01:09:26 +0000 (0:00:00.469) 0:03:11.338 ********** 2026-03-30 01:10:55.359386 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359392 | orchestrator | 2026-03-30 01:10:55.359399 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-30 01:10:55.359404 | orchestrator | Monday 30 March 2026 01:09:28 +0000 (0:00:01.976) 0:03:13.314 ********** 2026-03-30 01:10:55.359408 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359412 | orchestrator | 2026-03-30 01:10:55.359417 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-30 01:10:55.359421 | orchestrator | Monday 30 March 2026 01:09:30 +0000 (0:00:02.012) 0:03:15.327 ********** 2026-03-30 01:10:55.359426 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359430 | orchestrator | 2026-03-30 01:10:55.359435 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-30 01:10:55.359439 | orchestrator | Monday 30 March 2026 01:09:32 +0000 (0:00:02.078) 0:03:17.405 ********** 2026-03-30 01:10:55.359443 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359448 | orchestrator | 2026-03-30 01:10:55.359452 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-30 01:10:55.359456 | orchestrator | Monday 30 March 2026 01:09:35 +0000 (0:00:02.096) 0:03:19.502 ********** 2026-03-30 01:10:55.359460 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359465 | orchestrator | 2026-03-30 01:10:55.359469 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-30 01:10:55.359473 | orchestrator | 
Monday 30 March 2026 01:09:56 +0000 (0:00:21.777) 0:03:41.279 ********** 2026-03-30 01:10:55.359478 | orchestrator | 2026-03-30 01:10:55.359482 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-30 01:10:55.359487 | orchestrator | Monday 30 March 2026 01:09:56 +0000 (0:00:00.065) 0:03:41.345 ********** 2026-03-30 01:10:55.359491 | orchestrator | 2026-03-30 01:10:55.359499 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-30 01:10:55.359504 | orchestrator | Monday 30 March 2026 01:09:56 +0000 (0:00:00.063) 0:03:41.409 ********** 2026-03-30 01:10:55.359508 | orchestrator | 2026-03-30 01:10:55.359513 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-30 01:10:55.359517 | orchestrator | Monday 30 March 2026 01:09:57 +0000 (0:00:00.079) 0:03:41.489 ********** 2026-03-30 01:10:55.359522 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359526 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:10:55.359531 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:10:55.359535 | orchestrator | 2026-03-30 01:10:55.359540 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-30 01:10:55.359544 | orchestrator | Monday 30 March 2026 01:10:12 +0000 (0:00:15.624) 0:03:57.113 ********** 2026-03-30 01:10:55.359549 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359553 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:10:55.359557 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:10:55.359562 | orchestrator | 2026-03-30 01:10:55.359566 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-30 01:10:55.359570 | orchestrator | Monday 30 March 2026 01:10:23 +0000 (0:00:11.143) 0:04:08.257 ********** 2026-03-30 01:10:55.359575 | orchestrator | changed: [testbed-node-2] 
2026-03-30 01:10:55.359579 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:10:55.359583 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359588 | orchestrator | 2026-03-30 01:10:55.359592 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-30 01:10:55.359597 | orchestrator | Monday 30 March 2026 01:10:31 +0000 (0:00:08.180) 0:04:16.437 ********** 2026-03-30 01:10:55.359601 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359606 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:10:55.359610 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:10:55.359615 | orchestrator | 2026-03-30 01:10:55.359619 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-30 01:10:55.359623 | orchestrator | Monday 30 March 2026 01:10:41 +0000 (0:00:09.900) 0:04:26.338 ********** 2026-03-30 01:10:55.359628 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:10:55.359632 | orchestrator | changed: [testbed-node-1] 2026-03-30 01:10:55.359637 | orchestrator | changed: [testbed-node-2] 2026-03-30 01:10:55.359641 | orchestrator | 2026-03-30 01:10:55.359645 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:10:55.359650 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-30 01:10:55.359654 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 01:10:55.359658 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-30 01:10:55.359661 | orchestrator | 2026-03-30 01:10:55.359665 | orchestrator | 2026-03-30 01:10:55.359669 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:10:55.359673 | orchestrator | Monday 30 March 2026 01:10:52 +0000 
(0:00:11.017) 0:04:37.355 ********** 2026-03-30 01:10:55.359680 | orchestrator | =============================================================================== 2026-03-30 01:10:55.359684 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.78s 2026-03-30 01:10:55.359691 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.62s 2026-03-30 01:10:55.359695 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.54s 2026-03-30 01:10:55.359699 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.20s 2026-03-30 01:10:55.359702 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.18s 2026-03-30 01:10:55.359709 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.14s 2026-03-30 01:10:55.359713 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.02s 2026-03-30 01:10:55.359716 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.90s 2026-03-30 01:10:55.359720 | orchestrator | octavia : Create security groups for octavia ---------------------------- 8.75s 2026-03-30 01:10:55.359724 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.18s 2026-03-30 01:10:55.359728 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.48s 2026-03-30 01:10:55.359732 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.86s 2026-03-30 01:10:55.359736 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.84s 2026-03-30 01:10:55.359740 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.58s 2026-03-30 01:10:55.359743 | orchestrator | service-ks-register : octavia | Creating endpoints 
---------------------- 6.02s 2026-03-30 01:10:55.359747 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.70s 2026-03-30 01:10:55.359751 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.94s 2026-03-30 01:10:55.359755 | orchestrator | octavia : Create loadbalancer management network ------------------------ 4.91s 2026-03-30 01:10:55.359759 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.84s 2026-03-30 01:10:55.359763 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.79s 2026-03-30 01:10:55.359767 | orchestrator | 2026-03-30 01:10:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:10:58.393770 | orchestrator | 2026-03-30 01:10:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:01.423665 | orchestrator | 2026-03-30 01:11:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:04.467184 | orchestrator | 2026-03-30 01:11:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:07.511414 | orchestrator | 2026-03-30 01:11:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:10.560262 | orchestrator | 2026-03-30 01:11:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:13.604591 | orchestrator | 2026-03-30 01:11:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:16.647156 | orchestrator | 2026-03-30 01:11:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:19.687905 | orchestrator | 2026-03-30 01:11:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:22.732708 | orchestrator | 2026-03-30 01:11:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:25.780689 | orchestrator | 2026-03-30 01:11:25 | INFO  | Wait 1 second(s) until refresh of running tasks 
2026-03-30 01:11:28.820523 | orchestrator | 2026-03-30 01:11:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:31.857069 | orchestrator | 2026-03-30 01:11:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:34.900930 | orchestrator | 2026-03-30 01:11:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:37.944311 | orchestrator | 2026-03-30 01:11:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:40.986188 | orchestrator | 2026-03-30 01:11:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:44.027689 | orchestrator | 2026-03-30 01:11:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:47.067323 | orchestrator | 2026-03-30 01:11:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:50.107686 | orchestrator | 2026-03-30 01:11:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:53.151092 | orchestrator | 2026-03-30 01:11:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-30 01:11:56.192306 | orchestrator | 2026-03-30 01:11:56.398589 | orchestrator | 2026-03-30 01:11:56.403305 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 30 01:11:56 UTC 2026 2026-03-30 01:11:56.403349 | orchestrator | 2026-03-30 01:11:56.741227 | orchestrator | ok: Runtime: 0:32:26.119843 2026-03-30 01:11:57.001775 | 2026-03-30 01:11:57.001934 | TASK [Bootstrap services] 2026-03-30 01:11:57.815162 | orchestrator | 2026-03-30 01:11:57.815278 | orchestrator | # BOOTSTRAP 2026-03-30 01:11:57.815319 | orchestrator | 2026-03-30 01:11:57.815325 | orchestrator | + set -e 2026-03-30 01:11:57.815335 | orchestrator | + echo 2026-03-30 01:11:57.815345 | orchestrator | + echo '# BOOTSTRAP' 2026-03-30 01:11:57.815355 | orchestrator | + echo 2026-03-30 01:11:57.815376 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-30 01:11:57.822529 | 
orchestrator | + set -e 2026-03-30 01:11:57.822577 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-30 01:12:02.634276 | orchestrator | 2026-03-30 01:12:02 | INFO  | It takes a moment until task c804398f-a83c-40ae-8513-8c68312a1e86 (flavor-manager) has been started and output is visible here. 2026-03-30 01:12:11.509296 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1L-1 created 2026-03-30 01:12:11.509405 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1L-1-5 created 2026-03-30 01:12:11.509418 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1V-2 created 2026-03-30 01:12:11.509423 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1V-2-5 created 2026-03-30 01:12:11.509427 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1V-4 created 2026-03-30 01:12:11.509431 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1V-4-10 created 2026-03-30 01:12:11.509435 | orchestrator | 2026-03-30 01:12:07 | INFO  | Flavor SCS-1V-8 created 2026-03-30 01:12:11.509440 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-1V-8-20 created 2026-03-30 01:12:11.509451 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-2V-4 created 2026-03-30 01:12:11.509455 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-2V-4-10 created 2026-03-30 01:12:11.509459 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-2V-8 created 2026-03-30 01:12:11.509463 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-2V-8-20 created 2026-03-30 01:12:11.509467 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-2V-16 created 2026-03-30 01:12:11.509470 | orchestrator | 2026-03-30 01:12:08 | INFO  | Flavor SCS-2V-16-50 created 2026-03-30 01:12:11.509474 | orchestrator | 2026-03-30 01:12:09 | INFO  | Flavor SCS-4V-8 created 2026-03-30 01:12:11.509478 | orchestrator | 2026-03-30 01:12:09 | INFO  | Flavor SCS-4V-8-20 created 2026-03-30 01:12:11.509482 | orchestrator | 2026-03-30 01:12:09 | INFO  | 
Flavor SCS-4V-16 created 2026-03-30 01:12:11.509486 | orchestrator | 2026-03-30 01:12:09 | INFO  | Flavor SCS-4V-16-50 created 2026-03-30 01:12:11.509490 | orchestrator | 2026-03-30 01:12:09 | INFO  | Flavor SCS-4V-32 created 2026-03-30 01:12:11.509494 | orchestrator | 2026-03-30 01:12:09 | INFO  | Flavor SCS-4V-32-100 created 2026-03-30 01:12:11.509497 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-8V-16 created 2026-03-30 01:12:11.509501 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-8V-16-50 created 2026-03-30 01:12:11.509505 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-8V-32 created 2026-03-30 01:12:11.509509 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-8V-32-100 created 2026-03-30 01:12:11.509513 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-16V-32 created 2026-03-30 01:12:11.509518 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-16V-32-100 created 2026-03-30 01:12:11.509521 | orchestrator | 2026-03-30 01:12:10 | INFO  | Flavor SCS-2V-4-20s created 2026-03-30 01:12:11.509525 | orchestrator | 2026-03-30 01:12:11 | INFO  | Flavor SCS-4V-8-50s created 2026-03-30 01:12:11.509529 | orchestrator | 2026-03-30 01:12:11 | INFO  | Flavor SCS-4V-16-100s created 2026-03-30 01:12:11.509533 | orchestrator | 2026-03-30 01:12:11 | INFO  | Flavor SCS-8V-32-100s created 2026-03-30 01:12:13.086739 | orchestrator | 2026-03-30 01:12:13 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-30 01:12:23.230000 | orchestrator | 2026-03-30 01:12:23 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-30 01:12:23.303578 | orchestrator | 2026-03-30 01:12:23 | INFO  | Task e82ad314-23e2-4c10-8689-0c22f86d224b (bootstrap-basic) was prepared for execution. 2026-03-30 01:12:23.303687 | orchestrator | 2026-03-30 01:12:23 | INFO  | It takes a moment until task e82ad314-23e2-4c10-8689-0c22f86d224b (bootstrap-basic) has been started and output is visible here. 
2026-03-30 01:13:09.875276 | orchestrator | 2026-03-30 01:13:09.875380 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-30 01:13:09.875390 | orchestrator | 2026-03-30 01:13:09.875395 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-30 01:13:09.875399 | orchestrator | Monday 30 March 2026 01:12:26 +0000 (0:00:00.098) 0:00:00.098 ********** 2026-03-30 01:13:09.875404 | orchestrator | ok: [localhost] 2026-03-30 01:13:09.875409 | orchestrator | 2026-03-30 01:13:09.875413 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-30 01:13:09.875496 | orchestrator | Monday 30 March 2026 01:12:28 +0000 (0:00:02.031) 0:00:02.129 ********** 2026-03-30 01:13:09.875509 | orchestrator | ok: [localhost] 2026-03-30 01:13:09.875518 | orchestrator | 2026-03-30 01:13:09.875523 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-30 01:13:09.875530 | orchestrator | Monday 30 March 2026 01:12:37 +0000 (0:00:09.439) 0:00:11.569 ********** 2026-03-30 01:13:09.875536 | orchestrator | changed: [localhost] 2026-03-30 01:13:09.875543 | orchestrator | 2026-03-30 01:13:09.875549 | orchestrator | TASK [Create public network] *************************************************** 2026-03-30 01:13:09.875555 | orchestrator | Monday 30 March 2026 01:12:45 +0000 (0:00:08.006) 0:00:19.575 ********** 2026-03-30 01:13:09.875561 | orchestrator | changed: [localhost] 2026-03-30 01:13:09.875567 | orchestrator | 2026-03-30 01:13:09.875577 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-30 01:13:09.875584 | orchestrator | Monday 30 March 2026 01:12:51 +0000 (0:00:05.375) 0:00:24.950 ********** 2026-03-30 01:13:09.875590 | orchestrator | changed: [localhost] 2026-03-30 01:13:09.875596 | orchestrator | 2026-03-30 01:13:09.875602 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-30 01:13:09.875608 | orchestrator | Monday 30 March 2026 01:12:57 +0000 (0:00:06.326) 0:00:31.277 ********** 2026-03-30 01:13:09.875614 | orchestrator | changed: [localhost] 2026-03-30 01:13:09.875621 | orchestrator | 2026-03-30 01:13:09.875627 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-30 01:13:09.875633 | orchestrator | Monday 30 March 2026 01:13:02 +0000 (0:00:04.393) 0:00:35.670 ********** 2026-03-30 01:13:09.875639 | orchestrator | changed: [localhost] 2026-03-30 01:13:09.875645 | orchestrator | 2026-03-30 01:13:09.875651 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-30 01:13:09.875667 | orchestrator | Monday 30 March 2026 01:13:06 +0000 (0:00:03.923) 0:00:39.594 ********** 2026-03-30 01:13:09.875673 | orchestrator | ok: [localhost] 2026-03-30 01:13:09.875679 | orchestrator | 2026-03-30 01:13:09.875685 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:13:09.875692 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-30 01:13:09.875699 | orchestrator | 2026-03-30 01:13:09.875705 | orchestrator | 2026-03-30 01:13:09.875711 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:13:09.875717 | orchestrator | Monday 30 March 2026 01:13:09 +0000 (0:00:03.677) 0:00:43.272 ********** 2026-03-30 01:13:09.875723 | orchestrator | =============================================================================== 2026-03-30 01:13:09.875729 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.44s 2026-03-30 01:13:09.875757 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.01s 2026-03-30 01:13:09.875764 | 
orchestrator | Set public network to default ------------------------------------------- 6.33s 2026-03-30 01:13:09.875770 | orchestrator | Create public network --------------------------------------------------- 5.38s 2026-03-30 01:13:09.875776 | orchestrator | Create public subnet ---------------------------------------------------- 4.39s 2026-03-30 01:13:09.875780 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.92s 2026-03-30 01:13:09.875784 | orchestrator | Create manager role ----------------------------------------------------- 3.68s 2026-03-30 01:13:09.875788 | orchestrator | Gathering Facts --------------------------------------------------------- 2.03s 2026-03-30 01:13:11.842169 | orchestrator | 2026-03-30 01:13:11 | INFO  | It takes a moment until task b0f9e3e3-fc03-4cfc-a366-85cd7d34decf (image-manager) has been started and output is visible here. 2026-03-30 01:13:54.425906 | orchestrator | 2026-03-30 01:13:14 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-30 01:13:54.425983 | orchestrator | 2026-03-30 01:13:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-30 01:13:54.425991 | orchestrator | 2026-03-30 01:13:14 | INFO  | Importing image Cirros 0.6.2 2026-03-30 01:13:54.425996 | orchestrator | 2026-03-30 01:13:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-30 01:13:54.426001 | orchestrator | 2026-03-30 01:13:17 | INFO  | Waiting for image to leave queued state... 2026-03-30 01:13:54.426007 | orchestrator | 2026-03-30 01:13:19 | INFO  | Waiting for import to complete... 
2026-03-30 01:13:54.426038 | orchestrator | 2026-03-30 01:13:29 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-30 01:13:54.426044 | orchestrator | 2026-03-30 01:13:30 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-30 01:13:54.426048 | orchestrator | 2026-03-30 01:13:30 | INFO  | Setting internal_version = 0.6.2 2026-03-30 01:13:54.426052 | orchestrator | 2026-03-30 01:13:30 | INFO  | Setting image_original_user = cirros 2026-03-30 01:13:54.426057 | orchestrator | 2026-03-30 01:13:30 | INFO  | Adding tag os:cirros 2026-03-30 01:13:54.426061 | orchestrator | 2026-03-30 01:13:30 | INFO  | Setting property architecture: x86_64 2026-03-30 01:13:54.426065 | orchestrator | 2026-03-30 01:13:30 | INFO  | Setting property hw_disk_bus: scsi 2026-03-30 01:13:54.426069 | orchestrator | 2026-03-30 01:13:31 | INFO  | Setting property hw_rng_model: virtio 2026-03-30 01:13:54.426073 | orchestrator | 2026-03-30 01:13:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-30 01:13:54.426077 | orchestrator | 2026-03-30 01:13:31 | INFO  | Setting property hw_watchdog_action: reset 2026-03-30 01:13:54.426080 | orchestrator | 2026-03-30 01:13:32 | INFO  | Setting property hypervisor_type: qemu 2026-03-30 01:13:54.426090 | orchestrator | 2026-03-30 01:13:32 | INFO  | Setting property os_distro: cirros 2026-03-30 01:13:54.426095 | orchestrator | 2026-03-30 01:13:32 | INFO  | Setting property os_purpose: minimal 2026-03-30 01:13:54.426098 | orchestrator | 2026-03-30 01:13:32 | INFO  | Setting property replace_frequency: never 2026-03-30 01:13:54.426103 | orchestrator | 2026-03-30 01:13:32 | INFO  | Setting property uuid_validity: none 2026-03-30 01:13:54.426109 | orchestrator | 2026-03-30 01:13:33 | INFO  | Setting property provided_until: none 2026-03-30 01:13:54.426115 | orchestrator | 2026-03-30 01:13:33 | INFO  | Setting property image_description: Cirros 2026-03-30 01:13:54.426121 | orchestrator | 2026-03-30 01:13:33 | INFO  | 
Setting property image_name: Cirros 2026-03-30 01:13:54.426145 | orchestrator | 2026-03-30 01:13:33 | INFO  | Setting property internal_version: 0.6.2 2026-03-30 01:13:54.426151 | orchestrator | 2026-03-30 01:13:33 | INFO  | Setting property image_original_user: cirros 2026-03-30 01:13:54.426157 | orchestrator | 2026-03-30 01:13:34 | INFO  | Setting property os_version: 0.6.2 2026-03-30 01:13:54.426164 | orchestrator | 2026-03-30 01:13:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-30 01:13:54.426172 | orchestrator | 2026-03-30 01:13:34 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-30 01:13:54.426177 | orchestrator | 2026-03-30 01:13:34 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-30 01:13:54.426183 | orchestrator | 2026-03-30 01:13:34 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-30 01:13:54.426192 | orchestrator | 2026-03-30 01:13:34 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-30 01:13:54.426198 | orchestrator | 2026-03-30 01:13:35 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-30 01:13:54.426205 | orchestrator | 2026-03-30 01:13:35 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-30 01:13:54.426211 | orchestrator | 2026-03-30 01:13:35 | INFO  | Importing image Cirros 0.6.3 2026-03-30 01:13:54.426217 | orchestrator | 2026-03-30 01:13:35 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-30 01:13:54.426223 | orchestrator | 2026-03-30 01:13:36 | INFO  | Waiting for image to leave queued state... 2026-03-30 01:13:54.426229 | orchestrator | 2026-03-30 01:13:39 | INFO  | Waiting for import to complete... 
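The repeated "Setting property ..." lines above suggest the tool reconciles a desired property set against what the image already carries. A simplified sketch of that diff step (function and variable names are illustrative, not the image-manager's API):

```python
def pending_properties(desired, current):
    """Return only the key/value pairs that still need to be set on the image."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

desired = {
    "architecture": "x86_64",
    "hw_disk_bus": "scsi",
    "hw_rng_model": "virtio",
    "os_distro": "cirros",
}
current = {"architecture": "x86_64"}  # already present on the image
for key, value in sorted(pending_properties(desired, current).items()):
    print(f"Setting property {key}: {value}")
# Setting property hw_disk_bus: scsi
# Setting property hw_rng_model: virtio
# Setting property os_distro: cirros
```

Re-running against an image that already matches yields an empty diff, which is what makes repeated bootstrap runs idempotent.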
2026-03-30 01:13:54.426250 | orchestrator | 2026-03-30 01:13:49 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-30 01:13:54.426257 | orchestrator | 2026-03-30 01:13:49 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-30 01:13:54.426264 | orchestrator | 2026-03-30 01:13:49 | INFO  | Setting internal_version = 0.6.3 2026-03-30 01:13:54.426269 | orchestrator | 2026-03-30 01:13:49 | INFO  | Setting image_original_user = cirros 2026-03-30 01:13:54.426272 | orchestrator | 2026-03-30 01:13:49 | INFO  | Adding tag os:cirros 2026-03-30 01:13:54.426276 | orchestrator | 2026-03-30 01:13:49 | INFO  | Setting property architecture: x86_64 2026-03-30 01:13:54.426280 | orchestrator | 2026-03-30 01:13:49 | INFO  | Setting property hw_disk_bus: scsi 2026-03-30 01:13:54.426284 | orchestrator | 2026-03-30 01:13:50 | INFO  | Setting property hw_rng_model: virtio 2026-03-30 01:13:54.426288 | orchestrator | 2026-03-30 01:13:50 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-30 01:13:54.426292 | orchestrator | 2026-03-30 01:13:50 | INFO  | Setting property hw_watchdog_action: reset 2026-03-30 01:13:54.426296 | orchestrator | 2026-03-30 01:13:50 | INFO  | Setting property hypervisor_type: qemu 2026-03-30 01:13:54.426300 | orchestrator | 2026-03-30 01:13:50 | INFO  | Setting property os_distro: cirros 2026-03-30 01:13:54.426304 | orchestrator | 2026-03-30 01:13:50 | INFO  | Setting property os_purpose: minimal 2026-03-30 01:13:54.426308 | orchestrator | 2026-03-30 01:13:51 | INFO  | Setting property replace_frequency: never 2026-03-30 01:13:54.426311 | orchestrator | 2026-03-30 01:13:51 | INFO  | Setting property uuid_validity: none 2026-03-30 01:13:54.426315 | orchestrator | 2026-03-30 01:13:51 | INFO  | Setting property provided_until: none 2026-03-30 01:13:54.426319 | orchestrator | 2026-03-30 01:13:51 | INFO  | Setting property image_description: Cirros 2026-03-30 01:13:54.426328 | orchestrator | 2026-03-30 01:13:51 | INFO  | 
Setting property image_name: Cirros 2026-03-30 01:13:54.426332 | orchestrator | 2026-03-30 01:13:51 | INFO  | Setting property internal_version: 0.6.3 2026-03-30 01:13:54.426336 | orchestrator | 2026-03-30 01:13:52 | INFO  | Setting property image_original_user: cirros 2026-03-30 01:13:54.426340 | orchestrator | 2026-03-30 01:13:52 | INFO  | Setting property os_version: 0.6.3 2026-03-30 01:13:54.426343 | orchestrator | 2026-03-30 01:13:52 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-30 01:13:54.426347 | orchestrator | 2026-03-30 01:13:52 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-30 01:13:54.426351 | orchestrator | 2026-03-30 01:13:53 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-30 01:13:54.426355 | orchestrator | 2026-03-30 01:13:53 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-30 01:13:54.426359 | orchestrator | 2026-03-30 01:13:53 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-30 01:13:54.677056 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-30 01:13:56.618169 | orchestrator | 2026-03-30 01:13:56 | INFO  | date: 2026-03-29 2026-03-30 01:13:56.618283 | orchestrator | 2026-03-30 01:13:56 | INFO  | image: octavia-amphora-haproxy-2024.2.20260329.qcow2 2026-03-30 01:13:56.618350 | orchestrator | 2026-03-30 01:13:56 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2 2026-03-30 01:13:56.618695 | orchestrator | 2026-03-30 01:13:56 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2.CHECKSUM 2026-03-30 01:13:56.782106 | orchestrator | 2026-03-30 01:13:56 | INFO  | checksum: 5272c69684e4fe71f33dea08bbea00caea18adf692daa1ba22f6b007101fb94b 2026-03-30 01:13:56.872797 | orchestrator | 
2026-03-30 01:13:56 | INFO  | It takes a moment until task ec982511-cec1-4390-9099-df21a66f6bab (image-manager) has been started and output is visible here. 2026-03-30 01:14:58.219448 | orchestrator | 2026-03-30 01:13:59 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-29' 2026-03-30 01:14:58.219504 | orchestrator | 2026-03-30 01:13:59 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2: 200 2026-03-30 01:14:58.219511 | orchestrator | 2026-03-30 01:13:59 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-29 2026-03-30 01:14:58.219515 | orchestrator | 2026-03-30 01:13:59 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2 2026-03-30 01:14:58.219520 | orchestrator | 2026-03-30 01:14:00 | INFO  | Waiting for image to leave queued state... 2026-03-30 01:14:58.219524 | orchestrator | 2026-03-30 01:14:02 | INFO  | Waiting for import to complete... 2026-03-30 01:14:58.219528 | orchestrator | 2026-03-30 01:14:12 | INFO  | Waiting for import to complete... 2026-03-30 01:14:58.219532 | orchestrator | 2026-03-30 01:14:22 | INFO  | Waiting for import to complete... 2026-03-30 01:14:58.219536 | orchestrator | 2026-03-30 01:14:32 | INFO  | Waiting for import to complete... 2026-03-30 01:14:58.219541 | orchestrator | 2026-03-30 01:14:42 | INFO  | Waiting for import to complete... 
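The amphora bootstrap step above fetched a SHA-256 digest from a `.CHECKSUM` URL before importing the qcow2 image. Verifying a downloaded file against such a digest can be sketched as follows (the script's actual logic is not shown in the log; this is only an assumed shape):

```python
import hashlib
import tempfile

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks and return the hex digest,
    suitable for comparing against a published checksum."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo against a stand-in payload instead of the real ~1 GB image
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"fake image payload")
    tmp_path = tmp.name

expected = hashlib.sha256(b"fake image payload").hexdigest()
assert sha256_file(tmp_path) == expected
```

Chunked reading keeps memory flat regardless of image size, which matters for multi-gigabyte qcow2 downloads.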
2026-03-30 01:14:58.219545 | orchestrator | 2026-03-30 01:14:53 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-29' successfully completed, reloading images 2026-03-30 01:14:58.219559 | orchestrator | 2026-03-30 01:14:53 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-29' 2026-03-30 01:14:58.219564 | orchestrator | 2026-03-30 01:14:53 | INFO  | Setting internal_version = 2026-03-29 2026-03-30 01:14:58.219567 | orchestrator | 2026-03-30 01:14:53 | INFO  | Setting image_original_user = ubuntu 2026-03-30 01:14:58.219571 | orchestrator | 2026-03-30 01:14:53 | INFO  | Adding tag amphora 2026-03-30 01:14:58.219575 | orchestrator | 2026-03-30 01:14:54 | INFO  | Adding tag os:ubuntu 2026-03-30 01:14:58.219579 | orchestrator | 2026-03-30 01:14:54 | INFO  | Setting property architecture: x86_64 2026-03-30 01:14:58.219583 | orchestrator | 2026-03-30 01:14:54 | INFO  | Setting property hw_disk_bus: scsi 2026-03-30 01:14:58.219587 | orchestrator | 2026-03-30 01:14:54 | INFO  | Setting property hw_rng_model: virtio 2026-03-30 01:14:58.219590 | orchestrator | 2026-03-30 01:14:55 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-30 01:14:58.219642 | orchestrator | 2026-03-30 01:14:55 | INFO  | Setting property hw_watchdog_action: reset 2026-03-30 01:14:58.219649 | orchestrator | 2026-03-30 01:14:55 | INFO  | Setting property hypervisor_type: qemu 2026-03-30 01:14:58.219655 | orchestrator | 2026-03-30 01:14:55 | INFO  | Setting property os_distro: ubuntu 2026-03-30 01:14:58.219662 | orchestrator | 2026-03-30 01:14:55 | INFO  | Setting property replace_frequency: quarterly 2026-03-30 01:14:58.219665 | orchestrator | 2026-03-30 01:14:55 | INFO  | Setting property uuid_validity: last-1 2026-03-30 01:14:58.219669 | orchestrator | 2026-03-30 01:14:56 | INFO  | Setting property provided_until: none 2026-03-30 01:14:58.219673 | orchestrator | 2026-03-30 01:14:56 | INFO  | Setting property os_purpose: network 2026-03-30 01:14:58.219677 | orchestrator 
| 2026-03-30 01:14:56 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-30 01:14:58.219689 | orchestrator | 2026-03-30 01:14:56 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-30 01:14:58.219693 | orchestrator | 2026-03-30 01:14:56 | INFO  | Setting property internal_version: 2026-03-29 2026-03-30 01:14:58.219696 | orchestrator | 2026-03-30 01:14:57 | INFO  | Setting property image_original_user: ubuntu 2026-03-30 01:14:58.219700 | orchestrator | 2026-03-30 01:14:57 | INFO  | Setting property os_version: 2026-03-29 2026-03-30 01:14:58.219704 | orchestrator | 2026-03-30 01:14:57 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2 2026-03-30 01:14:58.219708 | orchestrator | 2026-03-30 01:14:57 | INFO  | Setting property image_build_date: 2026-03-29 2026-03-30 01:14:58.219712 | orchestrator | 2026-03-30 01:14:57 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-29' 2026-03-30 01:14:58.219716 | orchestrator | 2026-03-30 01:14:57 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-29' 2026-03-30 01:14:58.219719 | orchestrator | 2026-03-30 01:14:58 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-30 01:14:58.219731 | orchestrator | 2026-03-30 01:14:58 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-30 01:14:58.219736 | orchestrator | 2026-03-30 01:14:58 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-30 01:14:58.219739 | orchestrator | 2026-03-30 01:14:58 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-30 01:14:58.633965 | orchestrator | ok: Runtime: 0:03:01.076274 2026-03-30 01:14:58.656921 | 2026-03-30 01:14:58.657124 | TASK [Run checks] 2026-03-30 01:14:59.330729 | orchestrator | + set -e 2026-03-30 01:14:59.330825 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-30 01:14:59.330835 | orchestrator | ++ export INTERACTIVE=false 2026-03-30 01:14:59.330843 | orchestrator | ++ INTERACTIVE=false 2026-03-30 01:14:59.330849 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-30 01:14:59.330854 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-30 01:14:59.330860 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-30 01:14:59.331497 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-30 01:14:59.335259 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 01:14:59.335302 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 01:14:59.335307 | orchestrator | + echo 2026-03-30 01:14:59.335318 | orchestrator | 2026-03-30 01:14:59.335323 | orchestrator | # CHECK 2026-03-30 01:14:59.335327 | orchestrator | 2026-03-30 01:14:59.335335 | orchestrator | + echo '# CHECK' 2026-03-30 01:14:59.335339 | orchestrator | + echo 2026-03-30 01:14:59.335421 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-30 01:14:59.336431 | orchestrator | ++ semver latest 5.0.0 2026-03-30 01:14:59.385872 | orchestrator | 2026-03-30 01:14:59.385925 | orchestrator | ## Containers @ testbed-manager 2026-03-30 01:14:59.385931 | orchestrator | 2026-03-30 01:14:59.385942 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-30 01:14:59.385947 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-30 01:14:59.385951 | orchestrator | + echo 2026-03-30 01:14:59.385955 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-30 01:14:59.385960 | orchestrator | + echo 2026-03-30 01:14:59.385963 | orchestrator | + osism container testbed-manager ps 2026-03-30 01:15:00.475813 | orchestrator | 2026-03-30 01:15:00 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-30 01:15:00.834222 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-03-30 01:15:00.834310 | orchestrator | 21121c11fb8c registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2026-03-30 01:15:00.834327 | orchestrator | 0a67c259e1b3 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2026-03-30 01:15:00.834335 | orchestrator | 7bce847fcf61 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-30 01:15:00.834339 | orchestrator | ecc32e58bb3c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-30 01:15:00.834350 | orchestrator | f08fc8e3ce26 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2026-03-30 01:15:00.834358 | orchestrator | a09bf154c204 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2026-03-30 01:15:00.834362 | orchestrator | 54ab1d744b1f registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-30 01:15:00.834366 | orchestrator | e10cbe5c9791 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-03-30 01:15:00.834381 | orchestrator | e2584066cec4 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-30 01:15:00.834386 | orchestrator | 1eb97e106790 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2026-03-30 01:15:00.834389 | orchestrator | 5646c518b1d6 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient 2026-03-30 01:15:00.834429 | orchestrator | 2f17433cfdab 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 30 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2026-03-30 01:15:00.834434 | orchestrator | d309a5c98256 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-30 01:15:00.834439 | orchestrator | fe0276f88e6f registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1 2026-03-30 01:15:00.834443 | orchestrator | c12325fee29b registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) ceph-ansible 2026-03-30 01:15:00.834447 | orchestrator | 041f6ba039b2 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-ansible 2026-03-30 01:15:00.834453 | orchestrator | b22ebff6f21c registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-kubernetes 2026-03-30 01:15:00.834457 | orchestrator | 0a8988ac71e4 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) kolla-ansible 2026-03-30 01:15:00.834461 | orchestrator | 43ccdd1abca7 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-30 01:15:00.834465 | orchestrator | 0b71912a8ec0 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 37 minutes (healthy) osismclient 2026-03-30 01:15:00.834469 | orchestrator | dcfc8ba535df registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-beat-1 2026-03-30 01:15:00.834473 | orchestrator | ef716b776af4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-openstack-1 2026-03-30 01:15:00.834476 | 
orchestrator | 02eb4ad9aa9c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-30 01:15:00.834484 | orchestrator | 471725017f10 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-listener-1 2026-03-30 01:15:00.834488 | orchestrator | ecbbb5d074b1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-flower-1 2026-03-30 01:15:00.834492 | orchestrator | ae09e74a7cfa registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-30 01:15:00.834496 | orchestrator | 35ff07208836 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1 2026-03-30 01:15:00.834499 | orchestrator | caadf866a1bd registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-30 01:15:00.834507 | orchestrator | 54bf70a5070e registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-30 01:15:00.985164 | orchestrator | 2026-03-30 01:15:00.985228 | orchestrator | ## Images @ testbed-manager 2026-03-30 01:15:00.985238 | orchestrator | 2026-03-30 01:15:00.985246 | orchestrator | + echo 2026-03-30 01:15:00.985254 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-30 01:15:00.985258 | orchestrator | + echo 2026-03-30 01:15:00.985264 | orchestrator | + osism container testbed-manager images 2026-03-30 01:15:02.440775 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-30 01:15:02.440841 | orchestrator | registry.osism.tech/osism/osism-ansible latest 17e5cc029fb2 
About an hour ago 638MB 2026-03-30 01:15:02.440872 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 1a30e393a6d9 About an hour ago 635MB 2026-03-30 01:15:02.440877 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 7a3c88dcf765 About an hour ago 1.24GB 2026-03-30 01:15:02.440881 | orchestrator | registry.osism.tech/osism/ceph-ansible reef ad27cd62c234 About an hour ago 585MB 2026-03-30 01:15:02.440896 | orchestrator | registry.osism.tech/osism/osism-frontend latest 3c159737af85 About an hour ago 212MB 2026-03-30 01:15:02.440900 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest d325a86980ab About an hour ago 357MB 2026-03-30 01:15:02.440904 | orchestrator | registry.osism.tech/osism/osism latest 35921e4d4fa2 6 hours ago 406MB 2026-03-30 01:15:02.440908 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a48e41e42567 20 hours ago 590MB 2026-03-30 01:15:02.440911 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b0ed8fd8634a 20 hours ago 679MB 2026-03-30 01:15:02.440915 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8f956ae648f4 20 hours ago 277MB 2026-03-30 01:15:02.440919 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 492307b4704c 20 hours ago 319MB 2026-03-30 01:15:02.440923 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 4977914eb0dc 20 hours ago 415MB 2026-03-30 01:15:02.440927 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b915b783a81f 20 hours ago 368MB 2026-03-30 01:15:02.440930 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 66e18ebb6c85 20 hours ago 850MB 2026-03-30 01:15:02.440944 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 dc1f484af6ce 20 hours ago 317MB 2026-03-30 01:15:02.440948 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 28687ea1626c 21 hours ago 239MB 2026-03-30 01:15:02.440951 | orchestrator | 
registry.osism.tech/osism/cephclient reef 5a3909e91e81 21 hours ago 453MB 2026-03-30 01:15:02.440955 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-03-30 01:15:02.440959 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-30 01:15:02.440963 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-30 01:15:02.440966 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-30 01:15:02.440970 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-30 01:15:02.440974 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-30 01:15:02.440978 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-30 01:15:02.595621 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-30 01:15:02.595682 | orchestrator | ++ semver latest 5.0.0 2026-03-30 01:15:02.644334 | orchestrator | 2026-03-30 01:15:02.644396 | orchestrator | ## Containers @ testbed-node-0 2026-03-30 01:15:02.644406 | orchestrator | 2026-03-30 01:15:02.644413 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-30 01:15:02.644419 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-30 01:15:02.644427 | orchestrator | + echo 2026-03-30 01:15:02.644434 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-30 01:15:02.644441 | orchestrator | + echo 2026-03-30 01:15:02.644449 | orchestrator | + osism container testbed-node-0 ps 2026-03-30 01:15:04.087059 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-30 01:15:04.087114 | orchestrator | 32520eeb2429 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-30 01:15:04.087120 | 
orchestrator | e8069ee4dac9 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-30 01:15:04.087125 | orchestrator | 5355e59a30bd registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-30 01:15:04.087129 | orchestrator | 9c1534d9b538 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-30 01:15:04.087133 | orchestrator | ba83f12d5af5 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-30 01:15:04.087137 | orchestrator | 552767c7b674 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-03-30 01:15:04.087141 | orchestrator | 26bd3e289207 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-03-30 01:15:04.087154 | orchestrator | 4cfee0ebad86 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-03-30 01:15:04.087158 | orchestrator | 9675130bc668 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) placement_api 2026-03-30 01:15:04.087177 | orchestrator | e74e58c13f37 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-30 01:15:04.087181 | orchestrator | de46b8181e72 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-30 01:15:04.087185 | orchestrator | 9960f4cee02b registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-03-30 01:15:04.087189 | orchestrator | 
7fd7bd70b8f5 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-03-30 01:15:04.087193 | orchestrator | a4f24b2145a7 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-03-30 01:15:04.087197 | orchestrator | be13b5fb8a18 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-03-30 01:15:04.087201 | orchestrator | 65be38cbcc2b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-03-30 01:15:04.087204 | orchestrator | fea60562b0a8 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-03-30 01:15:04.087208 | orchestrator | 8cb8d25d25a9 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-03-30 01:15:04.087212 | orchestrator | 9a7a4593d598 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-03-30 01:15:04.087216 | orchestrator | fe0dbf67b8f8 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-03-30 01:15:04.087220 | orchestrator | 0a95b16756ea registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-03-30 01:15:04.087232 | orchestrator | 0ec621a02e4b registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-03-30 01:15:04.087236 | orchestrator | 02e55061352a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_api 
2026-03-30 01:15:04.087240 | orchestrator | b6ea06255f3b registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-03-30 01:15:04.087244 | orchestrator | ddf55d674e86 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-03-30 01:15:04.087250 | orchestrator | 77e6dafc0fab registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-03-30 01:15:04.087254 | orchestrator | 8e74bc51346c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-03-30 01:15:04.087258 | orchestrator | 4403b9b19051 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-03-30 01:15:04.087269 | orchestrator | d90fb6b9e606 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-03-30 01:15:04.087277 | orchestrator | dd33ad207603 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-03-30 01:15:04.087281 | orchestrator | c6ea4e582714 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-03-30 01:15:04.087285 | orchestrator | d25a1d6010d8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-03-30 01:15:04.087288 | orchestrator | f1a69b90f5a0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-03-30 01:15:04.087292 | orchestrator | 7892ec75616a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2026-03-30 01:15:04.087296 | orchestrator | b8ffb29ef02f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-03-30 01:15:04.087300 | orchestrator | 97fa6cd7a0a4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-03-30 01:15:04.087304 | orchestrator | bd98c2bde491 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-03-30 01:15:04.087307 | orchestrator | bd786b82ae58 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-03-30 01:15:04.087311 | orchestrator | 71a795bcdd65 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-03-30 01:15:04.087315 | orchestrator | 0670ac6c8c77 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-03-30 01:15:04.087319 | orchestrator | 82f812bf3211 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-03-30 01:15:04.087323 | orchestrator | b74415150006 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2026-03-30 01:15:04.087327 | orchestrator | 8b2082d5bd1b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-03-30 01:15:04.087331 | orchestrator | e192936c9dd2 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-03-30 01:15:04.087338 | orchestrator | 43043add7209 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-03-30 01:15:04.087342 | orchestrator | 0636047f0278 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2026-03-30 01:15:04.087346 | orchestrator | e25e5178e210 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-03-30 01:15:04.087350 | orchestrator | ffb61b564198 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-03-30 01:15:04.087357 | orchestrator | c6897091ef82 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0
2026-03-30 01:15:04.087367 | orchestrator | 1d2bcd83e490 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-03-30 01:15:04.087371 | orchestrator | dcc489d3ff2e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-03-30 01:15:04.087379 | orchestrator | 19394ec8f6a1 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-03-30 01:15:04.087383 | orchestrator | 2697b5f2f718 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-03-30 01:15:04.087387 | orchestrator | f5a0773bd7e8 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-03-30 01:15:04.087393 | orchestrator | 6e9366bc517c registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-03-30 01:15:04.087398 | orchestrator | 1d75ead39bbe registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-03-30 01:15:04.087401 | orchestrator | e91b264be814 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-03-30 01:15:04.087405 | orchestrator | 3064b09ee3b4 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-03-30 01:15:04.087409 | orchestrator | b7b881c668b7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-03-30 01:15:04.221852 | orchestrator |
2026-03-30 01:15:04.221912 | orchestrator | ## Images @ testbed-node-0
2026-03-30 01:15:04.221921 | orchestrator |
2026-03-30 01:15:04.221928 | orchestrator | + echo
2026-03-30 01:15:04.221943 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-30 01:15:04.221950 | orchestrator | + echo
2026-03-30 01:15:04.221956 | orchestrator | + osism container testbed-node-0 images
2026-03-30 01:15:05.654600 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-30 01:15:05.654684 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 9cfce92a0f75 20 hours ago 287MB
2026-03-30 01:15:05.654695 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f40f68b0ffca 20 hours ago 1.54GB
2026-03-30 01:15:05.654702 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 d0b034b15d80 20 hours ago 1.57GB
2026-03-30 01:15:05.654709 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a48e41e42567 20 hours ago 590MB
2026-03-30 01:15:05.654716 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 5b504ae41198 20 hours ago 277MB
2026-03-30 01:15:05.654722 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 682fb0e09905 20 hours ago 1.04GB
2026-03-30 01:15:05.654729 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 1fa6055c0f03 20 hours ago 427MB
2026-03-30 01:15:05.654736 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 d297e18fd448 20 hours ago 333MB
2026-03-30 01:15:05.654743 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b0ed8fd8634a 20 hours ago 679MB
2026-03-30 01:15:05.654749 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8f956ae648f4 20 hours ago 277MB
2026-03-30 01:15:05.654769 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 22608b4515e9 20 hours ago 285MB
2026-03-30 01:15:05.654777 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a232515de8c1 20 hours ago 290MB
2026-03-30 01:15:05.654783 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 946763b90b9c 20 hours ago 290MB
2026-03-30 01:15:05.654789 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 1015b84f4bc5 20 hours ago 284MB
2026-03-30 01:15:05.654796 | orchestrator | registry.osism.tech/kolla/redis 2024.2 af206e379f36 20 hours ago 284MB
2026-03-30 01:15:05.654803 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 52fef9efef08 20 hours ago 1.16GB
2026-03-30 01:15:05.654820 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 e1535e391577 20 hours ago 463MB
2026-03-30 01:15:05.654827 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 6288a60905f4 20 hours ago 309MB
2026-03-30 01:15:05.654833 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b915b783a81f 20 hours ago 368MB
2026-03-30 01:15:05.654840 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 51a181b67679 20 hours ago 303MB
2026-03-30 01:15:05.654846 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e19aae3261fc 20 hours ago 312MB
2026-03-30 01:15:05.654852 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 dc1f484af6ce 20 hours ago 317MB
2026-03-30 01:15:05.654858 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 7b55e5fce6e1 20 hours ago 851MB
2026-03-30 01:15:05.654865 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 483b8029c699 20 hours ago 851MB
2026-03-30 01:15:05.654871 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d5fd6e3acbdd 20 hours ago 851MB
2026-03-30 01:15:05.654878 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 934e976828e0 20 hours ago 851MB
2026-03-30 01:15:05.654884 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 4a568b6f8264 20 hours ago 1.08GB
2026-03-30 01:15:05.654891 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 7ee451eaeac5 20 hours ago 1.05GB
2026-03-30 01:15:05.654898 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 7b9c7496f625 20 hours ago 1.05GB
2026-03-30 01:15:05.654904 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 643f0b0be3a2 20 hours ago 987MB
2026-03-30 01:15:05.654910 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 776c81b1f60b 20 hours ago 987MB
2026-03-30 01:15:05.654917 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 c37fa13049ac 20 hours ago 1.06GB
2026-03-30 01:15:05.654923 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a8b39d6da64b 20 hours ago 1.06GB
2026-03-30 01:15:05.654930 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 13f6bf0f0dcf 20 hours ago 1.04GB
2026-03-30 01:15:05.654937 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 8b653ea42b41 20 hours ago 1.04GB
2026-03-30 01:15:05.654944 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 b64e7116242d 20 hours ago 1.04GB
2026-03-30 01:15:05.654969 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 7f9661000e8a 20 hours ago 986MB
2026-03-30 01:15:05.654976 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 5642067bc77f 20 hours ago 985MB
2026-03-30 01:15:05.654983 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 9c53c78d7138 20 hours ago 985MB
2026-03-30 01:15:05.654989 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 d8cff4761fa8 20 hours ago 985MB
2026-03-30 01:15:05.655003 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 83061554bcb7 20 hours ago 984MB
2026-03-30 01:15:05.655009 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 3cef035d4600 20 hours ago 1.11GB
2026-03-30 01:15:05.655015 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 fc1276fc9f4e 20 hours ago 1.73GB
2026-03-30 01:15:05.655022 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 27e531b14deb 20 hours ago 1.42GB
2026-03-30 01:15:05.655029 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 b8556d3ff65e 20 hours ago 1.42GB
2026-03-30 01:15:05.655039 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 c6b9cc5969aa 20 hours ago 1.42GB
2026-03-30 01:15:05.655047 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 2a80c3261709 20 hours ago 1.17GB
2026-03-30 01:15:05.655053 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 db13f0e1e858 20 hours ago 1.06GB
2026-03-30 01:15:05.655059 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 dc7c295c6f38 20 hours ago 1GB
2026-03-30 01:15:05.655066 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6ccf0338ba1e 20 hours ago 1GB
2026-03-30 01:15:05.655073 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 cdb2e9569889 20 hours ago 1GB
2026-03-30 01:15:05.655079 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 8c4d3720ca10 20 hours ago 1GB
2026-03-30 01:15:05.655086 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 e82ee4db0b12 20 hours ago 1.25GB
2026-03-30 01:15:05.655092 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 47d7fa36e841 20 hours ago 1.14GB
2026-03-30 01:15:05.655098 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 39255c0badab 20 hours ago 1e+03MB
2026-03-30 01:15:05.655105 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 787c3d891046 20 hours ago 995MB
2026-03-30 01:15:05.655112 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 eb6e5750013f 20 hours ago 995MB
2026-03-30 01:15:05.655118 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 bcbe0c954062 20 hours ago 1e+03MB
2026-03-30 01:15:05.655125 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 9bce105dbe67 20 hours ago 994MB
2026-03-30 01:15:05.655131 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3cc6be150a73 20 hours ago 995MB
2026-03-30 01:15:05.655138 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 277253602b1a 20 hours ago 1.22GB
2026-03-30 01:15:05.655145 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4a428829c478 20 hours ago 1.38GB
2026-03-30 01:15:05.655152 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b8084d706d3b 20 hours ago 1.22GB
2026-03-30 01:15:05.655159 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 09e818143343 20 hours ago 1.22GB
2026-03-30 01:15:05.655167 | orchestrator | registry.osism.tech/osism/ceph-daemon reef b4f4bc508824 21 hours ago 1.35GB
2026-03-30 01:15:05.816649 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-30 01:15:05.817529 | orchestrator | ++ semver latest 5.0.0
2026-03-30 01:15:05.861693 | orchestrator |
2026-03-30 01:15:05.861747 | orchestrator | ## Containers @ testbed-node-1
2026-03-30 01:15:05.861754 | orchestrator |
2026-03-30 01:15:05.861759 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-30 01:15:05.861763 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-30 01:15:05.861767 | orchestrator | + echo
2026-03-30 01:15:05.861771 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-30 01:15:05.861775 | orchestrator | + echo
2026-03-30 01:15:05.861779 | orchestrator | + osism container testbed-node-1 ps
2026-03-30 01:15:07.309827 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-30 01:15:07.309890 | orchestrator | e19da11b55ad registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-30 01:15:07.309911 | orchestrator | 7baafe98105e registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-30 01:15:07.309925 | orchestrator | 2f4101daed26 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-30 01:15:07.309932 | orchestrator | 7ec568957baa registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-30 01:15:07.309938 | orchestrator | c43413e87704 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api
2026-03-30 01:15:07.309957 | orchestrator | 09e29d9faecd registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-03-30 01:15:07.309964 | orchestrator | 832d0be5f978 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_conductor
2026-03-30 01:15:07.309971 | orchestrator | 2d7864831d53 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-03-30 01:15:07.309980 | orchestrator | 8bb19ec57985 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) placement_api
2026-03-30 01:15:07.309987 | orchestrator | df9ce988ce77 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-03-30 01:15:07.309993 | orchestrator | f0d8fdfba513 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-03-30 01:15:07.309999 | orchestrator | fd7745b7bc39 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-03-30 01:15:07.310005 | orchestrator | 5274e5516015 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-03-30 01:15:07.310011 | orchestrator | e0daf12e0128 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-03-30 01:15:07.310043 | orchestrator | cdfc252aa554 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-03-30 01:15:07.310050 | orchestrator | 31ec534688ed registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-03-30 01:15:07.310056 | orchestrator | 85c38a1f81c2 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-03-30 01:15:07.310062 | orchestrator | 9c10bccc2afa registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-03-30 01:15:07.310068 | orchestrator | e683ddaf0556 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-03-30 01:15:07.310088 | orchestrator | 46ac93eded02 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-03-30 01:15:07.310095 | orchestrator | f19b73dbac46 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-03-30 01:15:07.310111 | orchestrator | e049d027ed79 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-03-30 01:15:07.310118 | orchestrator | a3269949b13f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_api
2026-03-30 01:15:07.310124 | orchestrator | 3792bb9cf76d registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-03-30 01:15:07.310130 | orchestrator | ff90465468e3 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-03-30 01:15:07.310137 | orchestrator | 66c165569e79 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-03-30 01:15:07.310143 | orchestrator | 3147141e05f5 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-03-30 01:15:07.310154 | orchestrator | bddbde883c9b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-03-30 01:15:07.310160 | orchestrator | a2af9c3d74c8 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-03-30 01:15:07.310167 | orchestrator | a73cf1b2a787 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-03-30 01:15:07.310173 | orchestrator | df0bdb8ce454 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-03-30 01:15:07.310180 | orchestrator | cf1a0af11328 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-03-30 01:15:07.310187 | orchestrator | 76eee879a5f1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-03-30 01:15:07.310193 | orchestrator | 7c3cbd154a4e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2026-03-30 01:15:07.310200 | orchestrator | 53fe91869591 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-03-30 01:15:07.310206 | orchestrator | c874320f9b19 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-03-30 01:15:07.310212 | orchestrator | 00be52bf2208 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-03-30 01:15:07.310217 | orchestrator | 13ad8458a172 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-03-30 01:15:07.310223 | orchestrator | b8d20a1ad903 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-03-30 01:15:07.310236 | orchestrator | ba179a439357 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2026-03-30 01:15:07.310243 | orchestrator | c34ebc696ff5 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-03-30 01:15:07.310250 | orchestrator | d2375b59edf9 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-03-30 01:15:07.310257 | orchestrator | e62866cdbde8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2026-03-30 01:15:07.310263 | orchestrator | db771278032f registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-03-30 01:15:07.310276 | orchestrator | 5c8cbde5497c registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-03-30 01:15:07.310283 | orchestrator | 37838aa29fd1 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd
2026-03-30 01:15:07.310289 | orchestrator | df61b990c5d6 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db
2026-03-30 01:15:07.310296 | orchestrator | c8fbd564d0a4 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db
2026-03-30 01:15:07.310303 | orchestrator | 6ec054c70582 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-03-30 01:15:07.310309 | orchestrator | 88ceb22ecbeb registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-03-30 01:15:07.310318 | orchestrator | 8bf5889159fb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2026-03-30 01:15:07.310325 | orchestrator | c1f0a665bea6 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-03-30 01:15:07.310335 | orchestrator | 40cc2d472873 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-03-30 01:15:07.310342 | orchestrator | 67f9bf3cf1c3 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-03-30 01:15:07.310349 | orchestrator | b89d25be00ab registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-03-30 01:15:07.310356 | orchestrator | 6e839b61a3d1 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-03-30 01:15:07.310362 | orchestrator | a5f6f9b3624d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-03-30 01:15:07.310369 | orchestrator | 5d4932d364bd registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-03-30 01:15:07.310380 | orchestrator | 26668def9688 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-03-30 01:15:07.442171 | orchestrator |
2026-03-30 01:15:07.442250 | orchestrator | ## Images @ testbed-node-1
2026-03-30 01:15:07.442262 | orchestrator |
2026-03-30 01:15:07.442270 | orchestrator | + echo
2026-03-30 01:15:07.442277 | orchestrator | + echo '## Images @ testbed-node-1'
2026-03-30 01:15:07.442292 | orchestrator | + echo
2026-03-30 01:15:07.442300 | orchestrator | + osism container testbed-node-1 images
2026-03-30 01:15:08.906285 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-30 01:15:08.906340 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 9cfce92a0f75 20 hours ago 287MB
2026-03-30 01:15:08.906346 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f40f68b0ffca 20 hours ago 1.54GB
2026-03-30 01:15:08.906351 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 d0b034b15d80 20 hours ago 1.57GB
2026-03-30 01:15:08.906356 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a48e41e42567 20 hours ago 590MB
2026-03-30 01:15:08.906361 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 5b504ae41198 20 hours ago 277MB
2026-03-30 01:15:08.906365 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 1fa6055c0f03 20 hours ago 427MB
2026-03-30 01:15:08.906370 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 682fb0e09905 20 hours ago 1.04GB
2026-03-30 01:15:08.906374 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 d297e18fd448 20 hours ago 333MB
2026-03-30 01:15:08.906379 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b0ed8fd8634a 20 hours ago 679MB
2026-03-30 01:15:08.906383 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8f956ae648f4 20 hours ago 277MB
2026-03-30 01:15:08.906388 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 22608b4515e9 20 hours ago 285MB
2026-03-30 01:15:08.906392 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a232515de8c1 20 hours ago 290MB
2026-03-30 01:15:08.906397 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 946763b90b9c 20 hours ago 290MB
2026-03-30 01:15:08.906401 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 1015b84f4bc5 20 hours ago 284MB
2026-03-30 01:15:08.906406 | orchestrator | registry.osism.tech/kolla/redis 2024.2 af206e379f36 20 hours ago 284MB
2026-03-30 01:15:08.906410 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 52fef9efef08 20 hours ago 1.16GB
2026-03-30 01:15:08.906415 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 e1535e391577 20 hours ago 463MB
2026-03-30 01:15:08.906419 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 6288a60905f4 20 hours ago 309MB
2026-03-30 01:15:08.906424 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b915b783a81f 20 hours ago 368MB
2026-03-30 01:15:08.906428 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 51a181b67679 20 hours ago 303MB
2026-03-30 01:15:08.906433 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e19aae3261fc 20 hours ago 312MB
2026-03-30 01:15:08.906438 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 dc1f484af6ce 20 hours ago 317MB
2026-03-30 01:15:08.906442 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 7b55e5fce6e1 20 hours ago 851MB
2026-03-30 01:15:08.906446 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d5fd6e3acbdd 20 hours ago 851MB
2026-03-30 01:15:08.906451 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 483b8029c699 20 hours ago 851MB
2026-03-30 01:15:08.906467 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 934e976828e0 20 hours ago 851MB
2026-03-30 01:15:08.906472 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 4a568b6f8264 20 hours ago 1.08GB
2026-03-30 01:15:08.906477 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 7ee451eaeac5 20 hours ago 1.05GB
2026-03-30 01:15:08.906481 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 7b9c7496f625 20 hours ago 1.05GB
2026-03-30 01:15:08.906486 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 c37fa13049ac 20 hours ago 1.06GB
2026-03-30 01:15:08.906491 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a8b39d6da64b 20 hours ago 1.06GB
2026-03-30 01:15:08.906495 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 13f6bf0f0dcf 20 hours ago 1.04GB
2026-03-30 01:15:08.906500 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 8b653ea42b41 20 hours ago 1.04GB
2026-03-30 01:15:08.906516 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 b64e7116242d 20 hours ago 1.04GB
2026-03-30 01:15:08.906526 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 7f9661000e8a 20 hours ago 986MB
2026-03-30 01:15:08.906533 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 3cef035d4600 20 hours ago 1.11GB
2026-03-30 01:15:08.906551 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 fc1276fc9f4e 20 hours ago 1.73GB
2026-03-30 01:15:08.906569 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 27e531b14deb 20 hours ago 1.42GB
2026-03-30 01:15:08.906583 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 b8556d3ff65e 20 hours ago 1.42GB
2026-03-30 01:15:08.906588 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 c6b9cc5969aa 20 hours ago 1.42GB
2026-03-30 01:15:08.906593 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 2a80c3261709 20 hours ago 1.17GB
2026-03-30 01:15:08.906597 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6ccf0338ba1e 20 hours ago 1GB
2026-03-30 01:15:08.906602 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 cdb2e9569889 20 hours ago 1GB
2026-03-30 01:15:08.906606 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 8c4d3720ca10 20 hours ago 1GB
2026-03-30 01:15:08.906663 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 e82ee4db0b12 20 hours ago 1.25GB
2026-03-30 01:15:08.906668 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 47d7fa36e841 20 hours ago 1.14GB
2026-03-30 01:15:08.906673 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 39255c0badab 20 hours ago 1e+03MB
2026-03-30 01:15:08.906677 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 787c3d891046 20 hours ago 995MB
2026-03-30 01:15:08.906682 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 eb6e5750013f 20 hours ago 995MB
2026-03-30 01:15:08.906686 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 bcbe0c954062 20 hours ago 1e+03MB
2026-03-30 01:15:08.906691 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 9bce105dbe67 20 hours ago 994MB
2026-03-30 01:15:08.906695 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3cc6be150a73 20 hours ago 995MB
2026-03-30 01:15:08.906700 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 277253602b1a 20 hours ago 1.22GB
2026-03-30 01:15:08.906705 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4a428829c478 20 hours ago 1.38GB
2026-03-30 01:15:08.906709 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b8084d706d3b 20 hours ago 1.22GB
2026-03-30 01:15:08.906719 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 09e818143343 20 hours ago 1.22GB
2026-03-30 01:15:08.906724 | orchestrator | registry.osism.tech/osism/ceph-daemon reef b4f4bc508824 21 hours ago 1.35GB
2026-03-30 01:15:09.046560 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-30 01:15:09.047604 | orchestrator | ++ semver latest 5.0.0
2026-03-30 01:15:09.098249 | orchestrator |
2026-03-30 01:15:09.098301 | orchestrator | ## Containers @ testbed-node-2
2026-03-30 01:15:09.098310 | orchestrator |
2026-03-30 01:15:09.098317 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-30 01:15:09.098324 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-30 01:15:09.098330 | orchestrator | + echo
2026-03-30 01:15:09.098336 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-03-30 01:15:09.098344 | orchestrator | + echo
2026-03-30 01:15:09.098351 | orchestrator | + osism container testbed-node-2 ps
2026-03-30 01:15:10.638491 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-30 01:15:10.638595 | orchestrator | 6d9eb3a5fe25 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-30 01:15:10.638606 | orchestrator | b6f8c6a2c6cc registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-30 01:15:10.638655 | orchestrator | ee093ed60b1b registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-30 01:15:10.638664 | orchestrator | 85fff0b8f731 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-30 01:15:10.638675 | orchestrator | 7b2cc24575a0 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-03-30 01:15:10.638682 | orchestrator | 27b3159bb75c registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-03-30 01:15:10.638689 | orchestrator | 903173d724fb registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-03-30 01:15:10.638695 | orchestrator | 8d879a253277 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-03-30 01:15:10.638702 | orchestrator | 5a8863bbbed5 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-03-30 01:15:10.638709 | orchestrator | ae05c44b6cb0 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-03-30 01:15:10.638715 | orchestrator | 7c7b0714bb81 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-03-30 01:15:10.638721 | orchestrator | 9fb81873d286 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-03-30 01:15:10.638728 | orchestrator | 83169c37480f registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-03-30 01:15:10.638736 | orchestrator | b11c7cd1766c registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-03-30 01:15:10.638762 | orchestrator | 14e3c4c2e68a registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-03-30 01:15:10.638793 | orchestrator | 6865d9218028 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-03-30 01:15:10.638801 | orchestrator | 5199289d33b2 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-03-30 01:15:10.638807 | orchestrator | 2b6a77e2d1b4 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-03-30 01:15:10.638813 | orchestrator | 9769c14e5cee registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-03-30 01:15:10.638820 | orchestrator | 2599e4cef530 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-03-30 01:15:10.638826 | orchestrator | 762858fa101b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-03-30 01:15:10.638849 | orchestrator | b09b31cca114 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-03-30 01:15:10.638855 | orchestrator | 6ad973c45ba5 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2026-03-30 01:15:10.638861 | orchestrator | f37efda1ed71 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-03-30 01:15:10.638867 | orchestrator | ed50908d85bd registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-03-30 01:15:10.638873 | orchestrator | 5cbba8e7c13f registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-03-30 01:15:10.638876 | orchestrator | e4236e3428f5 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up
13 minutes (healthy) cinder_scheduler 2026-03-30 01:15:10.638880 | orchestrator | c94c09a83dea registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2026-03-30 01:15:10.638886 | orchestrator | f9d82e2e4b28 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-03-30 01:15:10.638893 | orchestrator | c6c1205f878f registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-30 01:15:10.638899 | orchestrator | bf17a319a8d3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-03-30 01:15:10.638905 | orchestrator | 3e01784d9a64 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-03-30 01:15:10.638912 | orchestrator | 2dbe01746cb8 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-30 01:15:10.638918 | orchestrator | cef3f52725a1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2026-03-30 01:15:10.638931 | orchestrator | 8b7844abeab4 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-03-30 01:15:10.638938 | orchestrator | 10484aa576d3 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-03-30 01:15:10.638944 | orchestrator | 5722a8863704 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-03-30 01:15:10.638950 | orchestrator | 2f316f5efb93 registry.osism.tech/kolla/keystone-ssh:2024.2 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-03-30 01:15:10.638957 | orchestrator | 25d4d9e4fc01 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-03-30 01:15:10.638963 | orchestrator | 80d3d9aea05c registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 19 minutes (healthy) mariadb 2026-03-30 01:15:10.638969 | orchestrator | 833a5d5539ff registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-03-30 01:15:10.638974 | orchestrator | 9cdbebb206ee registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-03-30 01:15:10.638981 | orchestrator | f7565cc844c2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2026-03-30 01:15:10.638987 | orchestrator | 046e9840ad56 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-03-30 01:15:10.638999 | orchestrator | d3c74da7634e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-03-30 01:15:10.639006 | orchestrator | ee4b327f1a96 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2026-03-30 01:15:10.639013 | orchestrator | 2e8563d42ea8 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2026-03-30 01:15:10.639019 | orchestrator | 53619985bd03 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2026-03-30 01:15:10.639026 | orchestrator | 52f5ac5072e0 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-03-30 01:15:10.639037 | 
orchestrator | 2f1512518355 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-03-30 01:15:10.639044 | orchestrator | d2bd5a9c760e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-03-30 01:15:10.639050 | orchestrator | d57c9bb6a735 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-30 01:15:10.639056 | orchestrator | 8dab30a17c2b registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-03-30 01:15:10.639063 | orchestrator | 93e2134e430d registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-03-30 01:15:10.639081 | orchestrator | 93da316b4bb2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-03-30 01:15:10.639087 | orchestrator | 356e11a9cb48 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-03-30 01:15:10.639094 | orchestrator | cee469f0a0f8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-30 01:15:10.639101 | orchestrator | 6ea1d03cca7b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-03-30 01:15:10.639107 | orchestrator | a59f3dbcec4f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-30 01:15:10.774950 | orchestrator | 2026-03-30 01:15:10.775056 | orchestrator | ## Images @ testbed-node-2 2026-03-30 01:15:10.775066 | orchestrator | 2026-03-30 01:15:10.775071 | orchestrator | + echo 2026-03-30 01:15:10.775076 | orchestrator | + echo '## Images @ testbed-node-2' 
2026-03-30 01:15:10.775080 | orchestrator | + echo 2026-03-30 01:15:10.775085 | orchestrator | + osism container testbed-node-2 images 2026-03-30 01:15:12.248598 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-30 01:15:12.248708 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 9cfce92a0f75 20 hours ago 287MB 2026-03-30 01:15:12.248715 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f40f68b0ffca 20 hours ago 1.54GB 2026-03-30 01:15:12.248752 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 d0b034b15d80 20 hours ago 1.57GB 2026-03-30 01:15:12.248758 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a48e41e42567 20 hours ago 590MB 2026-03-30 01:15:12.248762 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 5b504ae41198 20 hours ago 277MB 2026-03-30 01:15:12.248766 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 1fa6055c0f03 20 hours ago 427MB 2026-03-30 01:15:12.248769 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 682fb0e09905 20 hours ago 1.04GB 2026-03-30 01:15:12.248773 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 d297e18fd448 20 hours ago 333MB 2026-03-30 01:15:12.248777 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b0ed8fd8634a 20 hours ago 679MB 2026-03-30 01:15:12.248781 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8f956ae648f4 20 hours ago 277MB 2026-03-30 01:15:12.248785 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 22608b4515e9 20 hours ago 285MB 2026-03-30 01:15:12.248791 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a232515de8c1 20 hours ago 290MB 2026-03-30 01:15:12.248797 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 946763b90b9c 20 hours ago 290MB 2026-03-30 01:15:12.248803 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 1015b84f4bc5 20 hours ago 284MB 2026-03-30 01:15:12.248808 | orchestrator | registry.osism.tech/kolla/redis 
2024.2 af206e379f36 20 hours ago 284MB 2026-03-30 01:15:12.248815 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 52fef9efef08 20 hours ago 1.16GB 2026-03-30 01:15:12.248821 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 e1535e391577 20 hours ago 463MB 2026-03-30 01:15:12.248826 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 6288a60905f4 20 hours ago 309MB 2026-03-30 01:15:12.248832 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b915b783a81f 20 hours ago 368MB 2026-03-30 01:15:12.248856 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 51a181b67679 20 hours ago 303MB 2026-03-30 01:15:12.248863 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e19aae3261fc 20 hours ago 312MB 2026-03-30 01:15:12.248868 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 dc1f484af6ce 20 hours ago 317MB 2026-03-30 01:15:12.248874 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 7b55e5fce6e1 20 hours ago 851MB 2026-03-30 01:15:12.248879 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 483b8029c699 20 hours ago 851MB 2026-03-30 01:15:12.248958 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d5fd6e3acbdd 20 hours ago 851MB 2026-03-30 01:15:12.248967 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 934e976828e0 20 hours ago 851MB 2026-03-30 01:15:12.248973 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 4a568b6f8264 20 hours ago 1.08GB 2026-03-30 01:15:12.248979 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 7ee451eaeac5 20 hours ago 1.05GB 2026-03-30 01:15:12.248985 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 7b9c7496f625 20 hours ago 1.05GB 2026-03-30 01:15:12.248991 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 c37fa13049ac 20 hours ago 1.06GB 2026-03-30 01:15:12.248997 | orchestrator 
| registry.osism.tech/kolla/octavia-api 2024.2 a8b39d6da64b 20 hours ago 1.06GB 2026-03-30 01:15:12.249003 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 13f6bf0f0dcf 20 hours ago 1.04GB 2026-03-30 01:15:12.249010 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 8b653ea42b41 20 hours ago 1.04GB 2026-03-30 01:15:12.249016 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 b64e7116242d 20 hours ago 1.04GB 2026-03-30 01:15:12.249022 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 7f9661000e8a 20 hours ago 986MB 2026-03-30 01:15:12.249028 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 3cef035d4600 20 hours ago 1.11GB 2026-03-30 01:15:12.249034 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 fc1276fc9f4e 20 hours ago 1.73GB 2026-03-30 01:15:12.249040 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 27e531b14deb 20 hours ago 1.42GB 2026-03-30 01:15:12.249047 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 b8556d3ff65e 20 hours ago 1.42GB 2026-03-30 01:15:12.249053 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 c6b9cc5969aa 20 hours ago 1.42GB 2026-03-30 01:15:12.249060 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 2a80c3261709 20 hours ago 1.17GB 2026-03-30 01:15:12.249066 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6ccf0338ba1e 20 hours ago 1GB 2026-03-30 01:15:12.249072 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 cdb2e9569889 20 hours ago 1GB 2026-03-30 01:15:12.249078 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 8c4d3720ca10 20 hours ago 1GB 2026-03-30 01:15:12.249084 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 e82ee4db0b12 20 hours ago 1.25GB 2026-03-30 01:15:12.249088 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 47d7fa36e841 20 hours ago 1.14GB 2026-03-30 01:15:12.249092 | 
orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 39255c0badab 20 hours ago 1e+03MB 2026-03-30 01:15:12.249096 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 787c3d891046 20 hours ago 995MB 2026-03-30 01:15:12.249105 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 eb6e5750013f 20 hours ago 995MB 2026-03-30 01:15:12.249116 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 bcbe0c954062 20 hours ago 1e+03MB 2026-03-30 01:15:12.249120 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 9bce105dbe67 20 hours ago 994MB 2026-03-30 01:15:12.249124 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3cc6be150a73 20 hours ago 995MB 2026-03-30 01:15:12.249128 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 277253602b1a 20 hours ago 1.22GB 2026-03-30 01:15:12.249131 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4a428829c478 20 hours ago 1.38GB 2026-03-30 01:15:12.249135 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b8084d706d3b 20 hours ago 1.22GB 2026-03-30 01:15:12.249139 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 09e818143343 20 hours ago 1.22GB 2026-03-30 01:15:12.249143 | orchestrator | registry.osism.tech/osism/ceph-daemon reef b4f4bc508824 21 hours ago 1.35GB 2026-03-30 01:15:12.391372 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-30 01:15:12.400779 | orchestrator | + set -e 2026-03-30 01:15:12.400847 | orchestrator | + source /opt/manager-vars.sh 2026-03-30 01:15:12.401512 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-30 01:15:12.401589 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-30 01:15:12.401597 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-30 01:15:12.401603 | orchestrator | ++ CEPH_VERSION=reef 2026-03-30 01:15:12.401682 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-30 01:15:12.401692 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-03-30 01:15:12.401698 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 01:15:12.401705 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 01:15:12.401713 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-30 01:15:12.401719 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-30 01:15:12.401726 | orchestrator | ++ export ARA=false 2026-03-30 01:15:12.401731 | orchestrator | ++ ARA=false 2026-03-30 01:15:12.401735 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-30 01:15:12.401739 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-30 01:15:12.401743 | orchestrator | ++ export TEMPEST=true 2026-03-30 01:15:12.401747 | orchestrator | ++ TEMPEST=true 2026-03-30 01:15:12.401837 | orchestrator | ++ export IS_ZUUL=true 2026-03-30 01:15:12.401844 | orchestrator | ++ IS_ZUUL=true 2026-03-30 01:15:12.401848 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 01:15:12.401852 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 01:15:12.401856 | orchestrator | ++ export EXTERNAL_API=false 2026-03-30 01:15:12.401860 | orchestrator | ++ EXTERNAL_API=false 2026-03-30 01:15:12.401864 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-30 01:15:12.401868 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-30 01:15:12.401872 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-30 01:15:12.401876 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-30 01:15:12.401880 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-30 01:15:12.401884 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-30 01:15:12.401995 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-30 01:15:12.402003 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-30 01:15:12.413106 | orchestrator | + set -e 2026-03-30 01:15:12.413174 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-30 01:15:12.413181 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-30 01:15:12.413187 | orchestrator | ++ INTERACTIVE=false 2026-03-30 01:15:12.413191 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-30 01:15:12.413196 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-30 01:15:12.413244 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-30 01:15:12.414604 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-30 01:15:12.422070 | orchestrator | 2026-03-30 01:15:12.422179 | orchestrator | # Ceph status 2026-03-30 01:15:12.422196 | orchestrator | 2026-03-30 01:15:12.422207 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 01:15:12.422220 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 01:15:12.422232 | orchestrator | + echo 2026-03-30 01:15:12.422243 | orchestrator | + echo '# Ceph status' 2026-03-30 01:15:12.422255 | orchestrator | + echo 2026-03-30 01:15:12.422267 | orchestrator | + ceph -s 2026-03-30 01:15:12.976355 | orchestrator | cluster: 2026-03-30 01:15:12.976444 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-30 01:15:12.976451 | orchestrator | health: HEALTH_OK 2026-03-30 01:15:12.976456 | orchestrator | 2026-03-30 01:15:12.976461 | orchestrator | services: 2026-03-30 01:15:12.976465 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2026-03-30 01:15:12.976471 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-0, testbed-node-1 2026-03-30 01:15:12.976476 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-30 01:15:12.976480 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m) 2026-03-30 01:15:12.976484 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-30 01:15:12.976488 | orchestrator | 2026-03-30 01:15:12.976492 | orchestrator | data: 2026-03-30 01:15:12.976496 | orchestrator | volumes: 1/1 healthy 2026-03-30 01:15:12.976500 | orchestrator | pools: 
14 pools, 401 pgs 2026-03-30 01:15:12.976504 | orchestrator | objects: 556 objects, 2.2 GiB 2026-03-30 01:15:12.976508 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-30 01:15:12.976512 | orchestrator | pgs: 401 active+clean 2026-03-30 01:15:12.976516 | orchestrator | 2026-03-30 01:15:13.024963 | orchestrator | 2026-03-30 01:15:13.025052 | orchestrator | # Ceph versions 2026-03-30 01:15:13.025061 | orchestrator | 2026-03-30 01:15:13.025068 | orchestrator | + echo 2026-03-30 01:15:13.025076 | orchestrator | + echo '# Ceph versions' 2026-03-30 01:15:13.025084 | orchestrator | + echo 2026-03-30 01:15:13.025091 | orchestrator | + ceph versions 2026-03-30 01:15:13.582332 | orchestrator | { 2026-03-30 01:15:13.582426 | orchestrator | "mon": { 2026-03-30 01:15:13.582437 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-30 01:15:13.582445 | orchestrator | }, 2026-03-30 01:15:13.582461 | orchestrator | "mgr": { 2026-03-30 01:15:13.582493 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-30 01:15:13.582500 | orchestrator | }, 2026-03-30 01:15:13.582506 | orchestrator | "osd": { 2026-03-30 01:15:13.582513 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-03-30 01:15:13.582519 | orchestrator | }, 2026-03-30 01:15:13.582522 | orchestrator | "mds": { 2026-03-30 01:15:13.582526 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-30 01:15:13.582530 | orchestrator | }, 2026-03-30 01:15:13.582534 | orchestrator | "rgw": { 2026-03-30 01:15:13.582539 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-30 01:15:13.582543 | orchestrator | }, 2026-03-30 01:15:13.582547 | orchestrator | "overall": { 2026-03-30 01:15:13.582551 | orchestrator | "ceph version 18.2.8 
(efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-03-30 01:15:13.582555 | orchestrator | } 2026-03-30 01:15:13.582559 | orchestrator | } 2026-03-30 01:15:13.627783 | orchestrator | 2026-03-30 01:15:13.627863 | orchestrator | # Ceph OSD tree 2026-03-30 01:15:13.627872 | orchestrator | 2026-03-30 01:15:13.627876 | orchestrator | + echo 2026-03-30 01:15:13.627881 | orchestrator | + echo '# Ceph OSD tree' 2026-03-30 01:15:13.627886 | orchestrator | + echo 2026-03-30 01:15:13.627890 | orchestrator | + ceph osd df tree 2026-03-30 01:15:14.160205 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-30 01:15:14.160296 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-03-30 01:15:14.160303 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-03-30 01:15:14.160307 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.69 0.96 186 up osd.0 2026-03-30 01:15:14.160312 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.14 1.04 202 up osd.4 2026-03-30 01:15:14.160316 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-03-30 01:15:14.160320 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.34 1.07 209 up osd.1 2026-03-30 01:15:14.160324 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.50 0.93 181 up osd.5 2026-03-30 01:15:14.160345 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-30 01:15:14.160349 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.04 1.02 192 up osd.2 2026-03-30 01:15:14.160353 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.79 0.98 200 up osd.3 2026-03-30 
01:15:14.160357 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-03-30 01:15:14.160361 | orchestrator | MIN/MAX VAR: 0.93/1.07 STDDEV: 0.28 2026-03-30 01:15:14.205527 | orchestrator | 2026-03-30 01:15:14.205610 | orchestrator | # Ceph monitor status 2026-03-30 01:15:14.205662 | orchestrator | 2026-03-30 01:15:14.205670 | orchestrator | + echo 2026-03-30 01:15:14.205677 | orchestrator | + echo '# Ceph monitor status' 2026-03-30 01:15:14.205684 | orchestrator | + echo 2026-03-30 01:15:14.205690 | orchestrator | + ceph mon stat 2026-03-30 01:15:14.773973 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-30 01:15:14.805683 | orchestrator | 2026-03-30 01:15:14.805772 | orchestrator | # Ceph quorum status 2026-03-30 01:15:14.805782 | orchestrator | 2026-03-30 01:15:14.805790 | orchestrator | + echo 2026-03-30 01:15:14.805797 | orchestrator | + echo '# Ceph quorum status' 2026-03-30 01:15:14.805805 | orchestrator | + echo 2026-03-30 01:15:14.805812 | orchestrator | + ceph quorum_status 2026-03-30 01:15:14.805819 | orchestrator | + jq 2026-03-30 01:15:15.398324 | orchestrator | { 2026-03-30 01:15:15.398406 | orchestrator | "election_epoch": 8, 2026-03-30 01:15:15.398414 | orchestrator | "quorum": [ 2026-03-30 01:15:15.398419 | orchestrator | 0, 2026-03-30 01:15:15.398423 | orchestrator | 1, 2026-03-30 01:15:15.398427 | orchestrator | 2 2026-03-30 01:15:15.398431 | orchestrator | ], 2026-03-30 01:15:15.398435 | orchestrator | "quorum_names": [ 2026-03-30 01:15:15.398439 | orchestrator | "testbed-node-0", 2026-03-30 01:15:15.398443 | orchestrator | "testbed-node-1", 2026-03-30 01:15:15.398447 | orchestrator | 
"testbed-node-2" 2026-03-30 01:15:15.398451 | orchestrator | ], 2026-03-30 01:15:15.398455 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-30 01:15:15.398460 | orchestrator | "quorum_age": 1568, 2026-03-30 01:15:15.398464 | orchestrator | "features": { 2026-03-30 01:15:15.398468 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-30 01:15:15.398471 | orchestrator | "quorum_mon": [ 2026-03-30 01:15:15.398475 | orchestrator | "kraken", 2026-03-30 01:15:15.398479 | orchestrator | "luminous", 2026-03-30 01:15:15.398483 | orchestrator | "mimic", 2026-03-30 01:15:15.398487 | orchestrator | "osdmap-prune", 2026-03-30 01:15:15.398490 | orchestrator | "nautilus", 2026-03-30 01:15:15.398494 | orchestrator | "octopus", 2026-03-30 01:15:15.398498 | orchestrator | "pacific", 2026-03-30 01:15:15.398502 | orchestrator | "elector-pinging", 2026-03-30 01:15:15.398505 | orchestrator | "quincy", 2026-03-30 01:15:15.398509 | orchestrator | "reef" 2026-03-30 01:15:15.398513 | orchestrator | ] 2026-03-30 01:15:15.398517 | orchestrator | }, 2026-03-30 01:15:15.398520 | orchestrator | "monmap": { 2026-03-30 01:15:15.398524 | orchestrator | "epoch": 1, 2026-03-30 01:15:15.398528 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-30 01:15:15.398532 | orchestrator | "modified": "2026-03-30T00:48:50.380001Z", 2026-03-30 01:15:15.398536 | orchestrator | "created": "2026-03-30T00:48:50.380001Z", 2026-03-30 01:15:15.398540 | orchestrator | "min_mon_release": 18, 2026-03-30 01:15:15.398543 | orchestrator | "min_mon_release_name": "reef", 2026-03-30 01:15:15.398547 | orchestrator | "election_strategy": 1, 2026-03-30 01:15:15.398551 | orchestrator | "disallowed_leaders": "", 2026-03-30 01:15:15.398554 | orchestrator | "stretch_mode": false, 2026-03-30 01:15:15.398558 | orchestrator | "tiebreaker_mon": "", 2026-03-30 01:15:15.398562 | orchestrator | "removed_ranks": "", 2026-03-30 01:15:15.398565 | orchestrator | "features": { 2026-03-30 
01:15:15.398570 | orchestrator | "persistent": [ 2026-03-30 01:15:15.398574 | orchestrator | "kraken", 2026-03-30 01:15:15.398577 | orchestrator | "luminous", 2026-03-30 01:15:15.398581 | orchestrator | "mimic", 2026-03-30 01:15:15.398585 | orchestrator | "osdmap-prune", 2026-03-30 01:15:15.398606 | orchestrator | "nautilus", 2026-03-30 01:15:15.398610 | orchestrator | "octopus", 2026-03-30 01:15:15.398614 | orchestrator | "pacific", 2026-03-30 01:15:15.398665 | orchestrator | "elector-pinging", 2026-03-30 01:15:15.398669 | orchestrator | "quincy", 2026-03-30 01:15:15.398673 | orchestrator | "reef" 2026-03-30 01:15:15.398677 | orchestrator | ], 2026-03-30 01:15:15.398681 | orchestrator | "optional": [] 2026-03-30 01:15:15.398691 | orchestrator | }, 2026-03-30 01:15:15.398695 | orchestrator | "mons": [ 2026-03-30 01:15:15.398698 | orchestrator | { 2026-03-30 01:15:15.398702 | orchestrator | "rank": 0, 2026-03-30 01:15:15.398706 | orchestrator | "name": "testbed-node-0", 2026-03-30 01:15:15.398710 | orchestrator | "public_addrs": { 2026-03-30 01:15:15.398713 | orchestrator | "addrvec": [ 2026-03-30 01:15:15.398717 | orchestrator | { 2026-03-30 01:15:15.398721 | orchestrator | "type": "v2", 2026-03-30 01:15:15.398725 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-30 01:15:15.398728 | orchestrator | "nonce": 0 2026-03-30 01:15:15.398732 | orchestrator | }, 2026-03-30 01:15:15.398736 | orchestrator | { 2026-03-30 01:15:15.398740 | orchestrator | "type": "v1", 2026-03-30 01:15:15.398744 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-30 01:15:15.398748 | orchestrator | "nonce": 0 2026-03-30 01:15:15.398752 | orchestrator | } 2026-03-30 01:15:15.398756 | orchestrator | ] 2026-03-30 01:15:15.398759 | orchestrator | }, 2026-03-30 01:15:15.398763 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-30 01:15:15.398767 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-30 01:15:15.398771 | orchestrator | "priority": 0, 2026-03-30 01:15:15.398775 
| orchestrator | "weight": 0, 2026-03-30 01:15:15.398826 | orchestrator | "crush_location": "{}" 2026-03-30 01:15:15.398836 | orchestrator | }, 2026-03-30 01:15:15.398842 | orchestrator | { 2026-03-30 01:15:15.398848 | orchestrator | "rank": 1, 2026-03-30 01:15:15.398854 | orchestrator | "name": "testbed-node-1", 2026-03-30 01:15:15.398860 | orchestrator | "public_addrs": { 2026-03-30 01:15:15.398867 | orchestrator | "addrvec": [ 2026-03-30 01:15:15.398872 | orchestrator | { 2026-03-30 01:15:15.398878 | orchestrator | "type": "v2", 2026-03-30 01:15:15.398884 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-30 01:15:15.398890 | orchestrator | "nonce": 0 2026-03-30 01:15:15.398895 | orchestrator | }, 2026-03-30 01:15:15.398901 | orchestrator | { 2026-03-30 01:15:15.398907 | orchestrator | "type": "v1", 2026-03-30 01:15:15.398913 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-30 01:15:15.398919 | orchestrator | "nonce": 0 2026-03-30 01:15:15.398925 | orchestrator | } 2026-03-30 01:15:15.398931 | orchestrator | ] 2026-03-30 01:15:15.398937 | orchestrator | }, 2026-03-30 01:15:15.398958 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-30 01:15:15.398965 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-30 01:15:15.398971 | orchestrator | "priority": 0, 2026-03-30 01:15:15.398977 | orchestrator | "weight": 0, 2026-03-30 01:15:15.398983 | orchestrator | "crush_location": "{}" 2026-03-30 01:15:15.398989 | orchestrator | }, 2026-03-30 01:15:15.398995 | orchestrator | { 2026-03-30 01:15:15.399001 | orchestrator | "rank": 2, 2026-03-30 01:15:15.399008 | orchestrator | "name": "testbed-node-2", 2026-03-30 01:15:15.399015 | orchestrator | "public_addrs": { 2026-03-30 01:15:15.399022 | orchestrator | "addrvec": [ 2026-03-30 01:15:15.399028 | orchestrator | { 2026-03-30 01:15:15.399035 | orchestrator | "type": "v2", 2026-03-30 01:15:15.399041 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-30 01:15:15.399048 | orchestrator | "nonce": 0 
2026-03-30 01:15:15.399055 | orchestrator | },
2026-03-30 01:15:15.399061 | orchestrator | {
2026-03-30 01:15:15.399067 | orchestrator | "type": "v1",
2026-03-30 01:15:15.399073 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-30 01:15:15.399080 | orchestrator | "nonce": 0
2026-03-30 01:15:15.399085 | orchestrator | }
2026-03-30 01:15:15.399089 | orchestrator | ]
2026-03-30 01:15:15.399093 | orchestrator | },
2026-03-30 01:15:15.399097 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-30 01:15:15.399100 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-30 01:15:15.399104 | orchestrator | "priority": 0,
2026-03-30 01:15:15.399108 | orchestrator | "weight": 0,
2026-03-30 01:15:15.399112 | orchestrator | "crush_location": "{}"
2026-03-30 01:15:15.399123 | orchestrator | }
2026-03-30 01:15:15.399127 | orchestrator | ]
2026-03-30 01:15:15.399130 | orchestrator | }
2026-03-30 01:15:15.399134 | orchestrator | }
2026-03-30 01:15:15.399216 | orchestrator |
2026-03-30 01:15:15.399227 | orchestrator | # Ceph free space status
2026-03-30 01:15:15.399233 | orchestrator |
2026-03-30 01:15:15.399238 | orchestrator | + echo
2026-03-30 01:15:15.399244 | orchestrator | + echo '# Ceph free space status'
2026-03-30 01:15:15.399250 | orchestrator | + echo
2026-03-30 01:15:15.399256 | orchestrator | + ceph df
2026-03-30 01:15:15.932436 | orchestrator | --- RAW STORAGE ---
2026-03-30 01:15:15.932500 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-30 01:15:15.932518 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-03-30 01:15:15.932525 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-03-30 01:15:15.932534 | orchestrator |
2026-03-30 01:15:15.932542 | orchestrator | --- POOLS ---
2026-03-30 01:15:15.932550 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-30 01:15:15.932558 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-03-30 01:15:15.932565 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-30 01:15:15.932573 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-30 01:15:15.932581 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-30 01:15:15.932588 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-30 01:15:15.932596 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-30 01:15:15.932603 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-03-30 01:15:15.932611 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-30 01:15:15.932648 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2026-03-30 01:15:15.932657 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-30 01:15:15.932664 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-30 01:15:15.932673 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.89 35 GiB
2026-03-30 01:15:15.932681 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-30 01:15:15.932689 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-30 01:15:15.963753 | orchestrator | ++ semver latest 5.0.0
2026-03-30 01:15:16.021573 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-30 01:15:16.021641 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-30 01:15:16.021648 | orchestrator | + osism apply facts
2026-03-30 01:15:27.275593 | orchestrator | 2026-03-30 01:15:27 | INFO  | Prepare task for execution of facts.
2026-03-30 01:15:27.351083 | orchestrator | 2026-03-30 01:15:27 | INFO  | Task 3b392de9-0f18-4b6f-9b25-19b6ba0dbc1f (facts) was prepared for execution.
2026-03-30 01:15:27.351155 | orchestrator | 2026-03-30 01:15:27 | INFO  | It takes a moment until task 3b392de9-0f18-4b6f-9b25-19b6ba0dbc1f (facts) has been started and output is visible here.
2026-03-30 01:15:38.606681 | orchestrator |
2026-03-30 01:15:38.606739 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-30 01:15:38.606745 | orchestrator |
2026-03-30 01:15:38.606749 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-30 01:15:38.606752 | orchestrator | Monday 30 March 2026 01:15:29 +0000 (0:00:00.303) 0:00:00.303 **********
2026-03-30 01:15:38.606756 | orchestrator | ok: [testbed-manager]
2026-03-30 01:15:38.606759 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:15:38.606762 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:15:38.606766 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:15:38.606769 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:15:38.606772 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:15:38.606775 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:15:38.606778 | orchestrator |
2026-03-30 01:15:38.606782 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-30 01:15:38.606797 | orchestrator | Monday 30 March 2026 01:15:31 +0000 (0:00:01.257) 0:00:01.560 **********
2026-03-30 01:15:38.606801 | orchestrator | skipping: [testbed-manager]
2026-03-30 01:15:38.606810 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:15:38.606813 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:15:38.606817 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:15:38.606820 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:15:38.606823 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:15:38.606826 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:15:38.606829 | orchestrator |
2026-03-30 01:15:38.606832 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-30 01:15:38.606835 | orchestrator |
2026-03-30 01:15:38.606838 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-30 01:15:38.606841 | orchestrator | Monday 30 March 2026 01:15:32 +0000 (0:00:01.050) 0:00:02.611 **********
2026-03-30 01:15:38.606844 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:15:38.606847 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:15:38.606850 | orchestrator | ok: [testbed-manager]
2026-03-30 01:15:38.606853 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:15:38.606856 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:15:38.606859 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:15:38.606862 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:15:38.606865 | orchestrator |
2026-03-30 01:15:38.606869 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-30 01:15:38.606872 | orchestrator |
2026-03-30 01:15:38.606875 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-30 01:15:38.606878 | orchestrator | Monday 30 March 2026 01:15:37 +0000 (0:00:05.357) 0:00:07.969 **********
2026-03-30 01:15:38.606881 | orchestrator | skipping: [testbed-manager]
2026-03-30 01:15:38.606884 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:15:38.606887 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:15:38.606890 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:15:38.606893 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:15:38.606896 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:15:38.606899 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:15:38.606903 | orchestrator |
2026-03-30 01:15:38.606906 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:15:38.606909 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606913 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606916 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606919 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606922 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606925 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606928 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:15:38.606931 | orchestrator |
2026-03-30 01:15:38.606934 | orchestrator |
2026-03-30 01:15:38.606937 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:15:38.606941 | orchestrator | Monday 30 March 2026 01:15:38 +0000 (0:00:00.729) 0:00:08.698 **********
2026-03-30 01:15:38.606944 | orchestrator | ===============================================================================
2026-03-30 01:15:38.606947 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.36s
2026-03-30 01:15:38.606953 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s
2026-03-30 01:15:38.606956 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
2026-03-30 01:15:38.606959 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.73s
2026-03-30 01:15:38.786407 | orchestrator | + osism validate ceph-mons
2026-03-30 01:16:09.666579 | orchestrator |
2026-03-30 01:16:09.666665 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-30 01:16:09.666673 | orchestrator |
2026-03-30 01:16:09.666678 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-30 01:16:09.666682 | orchestrator | Monday 30 March 2026 01:15:53 +0000 (0:00:00.542) 0:00:00.542 **********
2026-03-30 01:16:09.666747 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:09.666753 | orchestrator |
2026-03-30 01:16:09.666757 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-30 01:16:09.666762 | orchestrator | Monday 30 March 2026 01:15:54 +0000 (0:00:01.026) 0:00:01.569 **********
2026-03-30 01:16:09.666766 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:09.666770 | orchestrator |
2026-03-30 01:16:09.666774 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-30 01:16:09.666778 | orchestrator | Monday 30 March 2026 01:15:55 +0000 (0:00:00.690) 0:00:02.259 **********
2026-03-30 01:16:09.666783 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.666788 | orchestrator |
2026-03-30 01:16:09.666792 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-30 01:16:09.666796 | orchestrator | Monday 30 March 2026 01:15:55 +0000 (0:00:00.118) 0:00:02.378 **********
2026-03-30 01:16:09.666800 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.666804 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:09.666808 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:09.666812 | orchestrator |
2026-03-30 01:16:09.666815 | orchestrator | TASK [Get container info] ******************************************************
2026-03-30 01:16:09.666819 | orchestrator | Monday 30 March 2026 01:15:55 +0000 (0:00:00.281) 0:00:02.659 **********
2026-03-30 01:16:09.666823 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:09.666827 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:09.666831 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.666835 | orchestrator |
2026-03-30 01:16:09.666839 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-30 01:16:09.666843 | orchestrator | Monday 30 March 2026 01:15:57 +0000 (0:00:01.402) 0:00:04.062 **********
2026-03-30 01:16:09.666847 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.666851 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:16:09.666855 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:16:09.666859 | orchestrator |
2026-03-30 01:16:09.666863 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-30 01:16:09.666867 | orchestrator | Monday 30 March 2026 01:15:57 +0000 (0:00:00.279) 0:00:04.341 **********
2026-03-30 01:16:09.666871 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.666875 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:09.666879 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:09.666882 | orchestrator |
2026-03-30 01:16:09.666886 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-30 01:16:09.666890 | orchestrator | Monday 30 March 2026 01:15:57 +0000 (0:00:00.300) 0:00:04.655 **********
2026-03-30 01:16:09.666894 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.666897 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:09.666901 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:09.666905 | orchestrator |
2026-03-30 01:16:09.666909 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-30 01:16:09.666913 | orchestrator | Monday 30 March 2026 01:15:58 +0000 (0:00:00.435) 0:00:04.955 **********
2026-03-30 01:16:09.666917 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.666935 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:16:09.666939 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:16:09.666943 | orchestrator |
2026-03-30 01:16:09.666947 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-30 01:16:09.666950 | orchestrator | Monday 30 March 2026 01:15:58 +0000 (0:00:00.280) 0:00:05.390 **********
2026-03-30 01:16:09.666954 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.666958 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:09.666962 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:09.666966 | orchestrator |
2026-03-30 01:16:09.666982 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-30 01:16:09.666986 | orchestrator | Monday 30 March 2026 01:15:58 +0000 (0:00:00.280) 0:00:05.671 **********
2026-03-30 01:16:09.666990 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.666994 | orchestrator |
2026-03-30 01:16:09.666997 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-30 01:16:09.667001 | orchestrator | Monday 30 March 2026 01:15:59 +0000 (0:00:00.253) 0:00:05.925 **********
2026-03-30 01:16:09.667005 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667009 | orchestrator |
2026-03-30 01:16:09.667013 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-30 01:16:09.667017 | orchestrator | Monday 30 March 2026 01:15:59 +0000 (0:00:00.260) 0:00:06.185 **********
2026-03-30 01:16:09.667021 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667024 | orchestrator |
2026-03-30 01:16:09.667028 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:09.667032 | orchestrator | Monday 30 March 2026 01:15:59 +0000 (0:00:00.236) 0:00:06.422 **********
2026-03-30 01:16:09.667036 | orchestrator |
2026-03-30 01:16:09.667040 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:09.667044 | orchestrator | Monday 30 March 2026 01:15:59 +0000 (0:00:00.068) 0:00:06.490 **********
2026-03-30 01:16:09.667047 | orchestrator |
2026-03-30 01:16:09.667051 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:09.667055 | orchestrator | Monday 30 March 2026 01:15:59 +0000 (0:00:00.068) 0:00:06.559 **********
2026-03-30 01:16:09.667059 | orchestrator |
2026-03-30 01:16:09.667062 | orchestrator | TASK [Print report file information] *******************************************
2026-03-30 01:16:09.667066 | orchestrator | Monday 30 March 2026 01:16:00 +0000 (0:00:00.213) 0:00:06.772 **********
2026-03-30 01:16:09.667070 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667074 | orchestrator |
2026-03-30 01:16:09.667078 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-30 01:16:09.667081 | orchestrator | Monday 30 March 2026 01:16:00 +0000 (0:00:00.278) 0:00:07.051 **********
2026-03-30 01:16:09.667085 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667089 | orchestrator |
2026-03-30 01:16:09.667104 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-30 01:16:09.667108 | orchestrator | Monday 30 March 2026 01:16:00 +0000 (0:00:00.281) 0:00:07.333 **********
2026-03-30 01:16:09.667112 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667116 | orchestrator |
2026-03-30 01:16:09.667119 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-30 01:16:09.667123 | orchestrator | Monday 30 March 2026 01:16:00 +0000 (0:00:00.116) 0:00:07.449 **********
2026-03-30 01:16:09.667127 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:16:09.667131 | orchestrator |
2026-03-30 01:16:09.667134 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-30 01:16:09.667138 | orchestrator | Monday 30 March 2026 01:16:02 +0000 (0:00:01.805) 0:00:09.255 **********
2026-03-30 01:16:09.667143 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667147 | orchestrator |
2026-03-30 01:16:09.667152 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-30 01:16:09.667156 | orchestrator | Monday 30 March 2026 01:16:02 +0000 (0:00:00.325) 0:00:09.580 **********
2026-03-30 01:16:09.667166 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667170 | orchestrator |
2026-03-30 01:16:09.667175 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-30 01:16:09.667212 | orchestrator | Monday 30 March 2026 01:16:02 +0000 (0:00:00.121) 0:00:09.702 **********
2026-03-30 01:16:09.667216 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667220 | orchestrator |
2026-03-30 01:16:09.667225 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-30 01:16:09.667229 | orchestrator | Monday 30 March 2026 01:16:03 +0000 (0:00:00.307) 0:00:10.009 **********
2026-03-30 01:16:09.667236 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667241 | orchestrator |
2026-03-30 01:16:09.667245 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-30 01:16:09.667250 | orchestrator | Monday 30 March 2026 01:16:03 +0000 (0:00:00.285) 0:00:10.295 **********
2026-03-30 01:16:09.667254 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667259 | orchestrator |
2026-03-30 01:16:09.667263 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-30 01:16:09.667267 | orchestrator | Monday 30 March 2026 01:16:03 +0000 (0:00:00.103) 0:00:10.399 **********
2026-03-30 01:16:09.667272 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667276 | orchestrator |
2026-03-30 01:16:09.667280 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-30 01:16:09.667285 | orchestrator | Monday 30 March 2026 01:16:03 +0000 (0:00:00.124) 0:00:10.523 **********
2026-03-30 01:16:09.667289 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667293 | orchestrator |
2026-03-30 01:16:09.667298 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-30 01:16:09.667302 | orchestrator | Monday 30 March 2026 01:16:04 +0000 (0:00:00.280) 0:00:10.804 **********
2026-03-30 01:16:09.667307 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:16:09.667311 | orchestrator |
2026-03-30 01:16:09.667315 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-30 01:16:09.667320 | orchestrator | Monday 30 March 2026 01:16:05 +0000 (0:00:01.576) 0:00:12.381 **********
2026-03-30 01:16:09.667324 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667328 | orchestrator |
2026-03-30 01:16:09.667333 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-30 01:16:09.667337 | orchestrator | Monday 30 March 2026 01:16:05 +0000 (0:00:00.301) 0:00:12.682 **********
2026-03-30 01:16:09.667342 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667347 | orchestrator |
2026-03-30 01:16:09.667354 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-30 01:16:09.667360 | orchestrator | Monday 30 March 2026 01:16:06 +0000 (0:00:00.129) 0:00:12.811 **********
2026-03-30 01:16:09.667366 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:09.667372 | orchestrator |
2026-03-30 01:16:09.667382 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-30 01:16:09.667395 | orchestrator | Monday 30 March 2026 01:16:06 +0000 (0:00:00.142) 0:00:12.954 **********
2026-03-30 01:16:09.667401 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667407 | orchestrator |
2026-03-30 01:16:09.667414 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-30 01:16:09.667420 | orchestrator | Monday 30 March 2026 01:16:06 +0000 (0:00:00.143) 0:00:13.097 **********
2026-03-30 01:16:09.667426 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667431 | orchestrator |
2026-03-30 01:16:09.667437 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-30 01:16:09.667446 | orchestrator | Monday 30 March 2026 01:16:06 +0000 (0:00:00.131) 0:00:13.228 **********
2026-03-30 01:16:09.667452 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:09.667458 | orchestrator |
2026-03-30 01:16:09.667464 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-30 01:16:09.667470 | orchestrator | Monday 30 March 2026 01:16:06 +0000 (0:00:00.248) 0:00:13.477 **********
2026-03-30 01:16:09.667482 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:09.667490 | orchestrator |
2026-03-30 01:16:09.667500 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-30 01:16:09.667507 | orchestrator | Monday 30 March 2026 01:16:06 +0000 (0:00:00.247) 0:00:13.724 **********
2026-03-30 01:16:09.667513 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:09.667519 | orchestrator |
2026-03-30 01:16:09.667525 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-30 01:16:09.667531 | orchestrator | Monday 30 March 2026 01:16:08 +0000 (0:00:01.828) 0:00:15.553 **********
2026-03-30 01:16:09.667538 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:09.667544 | orchestrator |
2026-03-30 01:16:09.667551 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-30 01:16:09.667557 | orchestrator | Monday 30 March 2026 01:16:09 +0000 (0:00:00.254) 0:00:15.807 **********
2026-03-30 01:16:09.667564 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:09.667570 | orchestrator |
2026-03-30 01:16:09.667582 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:11.930105 | orchestrator | Monday 30 March 2026 01:16:09 +0000 (0:00:00.605) 0:00:16.413 **********
2026-03-30 01:16:11.930203 | orchestrator |
2026-03-30 01:16:11.930211 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:11.930216 | orchestrator | Monday 30 March 2026 01:16:09 +0000 (0:00:00.071) 0:00:16.484 **********
2026-03-30 01:16:11.930220 | orchestrator |
2026-03-30 01:16:11.930224 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:11.930228 | orchestrator | Monday 30 March 2026 01:16:09 +0000 (0:00:00.071) 0:00:16.555 **********
2026-03-30 01:16:11.930232 | orchestrator |
2026-03-30 01:16:11.930236 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-30 01:16:11.930240 | orchestrator | Monday 30 March 2026 01:16:09 +0000 (0:00:00.072) 0:00:16.628 **********
2026-03-30 01:16:11.930244 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:11.930248 | orchestrator |
2026-03-30 01:16:11.930252 | orchestrator | TASK [Print report file information] *******************************************
2026-03-30 01:16:11.930256 | orchestrator | Monday 30 March 2026 01:16:11 +0000 (0:00:01.326) 0:00:17.954 **********
2026-03-30 01:16:11.930260 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-30 01:16:11.930264 | orchestrator |  "msg": [
2026-03-30 01:16:11.930269 | orchestrator |  "Validator run completed.",
2026-03-30 01:16:11.930273 | orchestrator |  "You can find the report file here:",
2026-03-30 01:16:11.930277 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-30T01:15:54+00:00-report.json",
2026-03-30 01:16:11.930282 | orchestrator |  "on the following host:",
2026-03-30 01:16:11.930286 | orchestrator |  "testbed-manager"
2026-03-30 01:16:11.930289 | orchestrator |  ]
2026-03-30 01:16:11.930294 | orchestrator | }
2026-03-30 01:16:11.930297 | orchestrator |
2026-03-30 01:16:11.930301 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:16:11.930306 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-30 01:16:11.930312 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:16:11.930316 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:16:11.930320 | orchestrator |
2026-03-30 01:16:11.930324 | orchestrator |
2026-03-30 01:16:11.930328 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:16:11.930332 | orchestrator | Monday 30 March 2026 01:16:11 +0000 (0:00:00.425) 0:00:18.380 **********
2026-03-30 01:16:11.930358 | orchestrator | ===============================================================================
2026-03-30 01:16:11.930362 | orchestrator | Aggregate test results step one ----------------------------------------- 1.83s
2026-03-30 01:16:11.930367 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.81s
2026-03-30 01:16:11.930370 | orchestrator | Gather status data ------------------------------------------------------ 1.58s
2026-03-30 01:16:11.930374 | orchestrator | Get container info ------------------------------------------------------ 1.40s
2026-03-30 01:16:11.930378 | orchestrator | Write report file ------------------------------------------------------- 1.33s
2026-03-30 01:16:11.930382 | orchestrator | Get timestamp for report file ------------------------------------------- 1.03s
2026-03-30 01:16:11.930386 | orchestrator | Create report output directory ------------------------------------------ 0.69s
2026-03-30 01:16:11.930389 | orchestrator | Aggregate test results step three --------------------------------------- 0.61s
2026-03-30 01:16:11.930393 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.44s
2026-03-30 01:16:11.930397 | orchestrator | Print report file information ------------------------------------------- 0.43s
2026-03-30 01:16:11.930400 | orchestrator | Flush handlers ---------------------------------------------------------- 0.35s
2026-03-30 01:16:11.930404 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s
2026-03-30 01:16:11.930408 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s
2026-03-30 01:16:11.930411 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s
2026-03-30 01:16:11.930415 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2026-03-30 01:16:11.930419 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-03-30 01:16:11.930422 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s
2026-03-30 01:16:11.930426 | orchestrator | Fail due to missing containers ------------------------------------------ 0.28s
2026-03-30 01:16:11.930430 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2026-03-30 01:16:11.930434 | orchestrator | Prepare status test vars ------------------------------------------------ 0.28s
2026-03-30 01:16:12.123881 | orchestrator | + osism validate ceph-mgrs
2026-03-30 01:16:41.130865 | orchestrator |
2026-03-30 01:16:41.130954 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-30 01:16:41.130966 | orchestrator |
2026-03-30 01:16:41.130973 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-30 01:16:41.130980 | orchestrator | Monday 30 March 2026 01:16:27 +0000 (0:00:00.532) 0:00:00.532 **********
2026-03-30 01:16:41.130987 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.130993 | orchestrator |
2026-03-30 01:16:41.130999 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-30 01:16:41.131005 | orchestrator | Monday 30 March 2026 01:16:28 +0000 (0:00:01.007) 0:00:01.540 **********
2026-03-30 01:16:41.131012 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.131018 | orchestrator |
2026-03-30 01:16:41.131024 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-30 01:16:41.131031 | orchestrator | Monday 30 March 2026 01:16:28 +0000 (0:00:00.700) 0:00:02.240 **********
2026-03-30 01:16:41.131039 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131046 | orchestrator |
2026-03-30 01:16:41.131052 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-30 01:16:41.131076 | orchestrator | Monday 30 March 2026 01:16:29 +0000 (0:00:00.120) 0:00:02.360 **********
2026-03-30 01:16:41.131082 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131088 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:41.131093 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:41.131099 | orchestrator |
2026-03-30 01:16:41.131106 | orchestrator | TASK [Get container info] ******************************************************
2026-03-30 01:16:41.131111 | orchestrator | Monday 30 March 2026 01:16:29 +0000 (0:00:00.294) 0:00:02.655 **********
2026-03-30 01:16:41.131135 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:41.131141 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131147 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:41.131153 | orchestrator |
2026-03-30 01:16:41.131159 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-30 01:16:41.131165 | orchestrator | Monday 30 March 2026 01:16:30 +0000 (0:00:01.533) 0:00:04.189 **********
2026-03-30 01:16:41.131171 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131177 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:16:41.131183 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:16:41.131189 | orchestrator |
2026-03-30 01:16:41.131202 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-30 01:16:41.131208 | orchestrator | Monday 30 March 2026 01:16:31 +0000 (0:00:00.279) 0:00:04.468 **********
2026-03-30 01:16:41.131214 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131220 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:41.131226 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:41.131232 | orchestrator |
2026-03-30 01:16:41.131238 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-30 01:16:41.131244 | orchestrator | Monday 30 March 2026 01:16:31 +0000 (0:00:00.302) 0:00:04.779 **********
2026-03-30 01:16:41.131250 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131256 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:41.131261 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:41.131267 | orchestrator |
2026-03-30 01:16:41.131273 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-03-30 01:16:41.131279 | orchestrator | Monday 30 March 2026 01:16:31 +0000 (0:00:00.447) 0:00:05.081 **********
2026-03-30 01:16:41.131285 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131291 | orchestrator | skipping: [testbed-node-1]
2026-03-30 01:16:41.131297 | orchestrator | skipping: [testbed-node-2]
2026-03-30 01:16:41.131303 | orchestrator |
2026-03-30 01:16:41.131308 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-03-30 01:16:41.131314 | orchestrator | Monday 30 March 2026 01:16:32 +0000 (0:00:00.317) 0:00:05.529 **********
2026-03-30 01:16:41.131320 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131326 | orchestrator | ok: [testbed-node-1]
2026-03-30 01:16:41.131332 | orchestrator | ok: [testbed-node-2]
2026-03-30 01:16:41.131337 | orchestrator |
2026-03-30 01:16:41.131342 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-30 01:16:41.131348 | orchestrator | Monday 30 March 2026 01:16:32 +0000 (0:00:00.247) 0:00:05.846 **********
2026-03-30 01:16:41.131354 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131360 | orchestrator |
2026-03-30 01:16:41.131366 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-30 01:16:41.131373 | orchestrator | Monday 30 March 2026 01:16:32 +0000 (0:00:00.246) 0:00:06.093 **********
2026-03-30 01:16:41.131379 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131385 | orchestrator |
2026-03-30 01:16:41.131391 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-30 01:16:41.131398 | orchestrator | Monday 30 March 2026 01:16:33 +0000 (0:00:00.255) 0:00:06.340 **********
2026-03-30 01:16:41.131404 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131410 | orchestrator |
2026-03-30 01:16:41.131417 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:41.131422 | orchestrator | Monday 30 March 2026 01:16:33 +0000 (0:00:00.069) 0:00:06.595 **********
2026-03-30 01:16:41.131428 | orchestrator |
2026-03-30 01:16:41.131434 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:41.131440 | orchestrator | Monday 30 March 2026 01:16:33 +0000 (0:00:00.069) 0:00:06.665 **********
2026-03-30 01:16:41.131446 | orchestrator |
2026-03-30 01:16:41.131453 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:41.131458 | orchestrator | Monday 30 March 2026 01:16:33 +0000 (0:00:00.068) 0:00:06.733 **********
2026-03-30 01:16:41.131473 | orchestrator |
2026-03-30 01:16:41.131478 | orchestrator | TASK [Print report file information] *******************************************
2026-03-30 01:16:41.131484 | orchestrator | Monday 30 March 2026 01:16:33 +0000 (0:00:00.227) 0:00:06.961 **********
2026-03-30 01:16:41.131491 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131496 | orchestrator |
2026-03-30 01:16:41.131502 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-30 01:16:41.131508 | orchestrator | Monday 30 March 2026 01:16:33 +0000 (0:00:00.257) 0:00:07.219 **********
2026-03-30 01:16:41.131514 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131520 | orchestrator |
2026-03-30 01:16:41.131543 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-30 01:16:41.131549 | orchestrator | Monday 30 March 2026 01:16:34 +0000 (0:00:00.252) 0:00:07.472 **********
2026-03-30 01:16:41.131555 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131561 | orchestrator |
2026-03-30 01:16:41.131567 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-30 01:16:41.131572 | orchestrator | Monday 30 March 2026 01:16:34 +0000 (0:00:00.123) 0:00:07.595 **********
2026-03-30 01:16:41.131577 | orchestrator | changed: [testbed-node-0]
2026-03-30 01:16:41.131582 | orchestrator |
2026-03-30 01:16:41.131587 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-03-30 01:16:41.131593 | orchestrator | Monday 30 March 2026 01:16:35 +0000 (0:00:01.544) 0:00:09.139 **********
2026-03-30 01:16:41.131598 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131604 | orchestrator |
2026-03-30 01:16:41.131609 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-03-30 01:16:41.131615 | orchestrator | Monday 30 March 2026 01:16:36 +0000 (0:00:00.243) 0:00:09.383 **********
2026-03-30 01:16:41.131620 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131626 | orchestrator |
2026-03-30 01:16:41.131631 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-03-30 01:16:41.131636 | orchestrator | Monday 30 March 2026 01:16:36 +0000 (0:00:00.295) 0:00:09.678 **********
2026-03-30 01:16:41.131641 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131646 | orchestrator |
2026-03-30 01:16:41.131652 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-03-30 01:16:41.131657 | orchestrator | Monday 30 March 2026 01:16:36 +0000 (0:00:00.136) 0:00:09.814 **********
2026-03-30 01:16:41.131662 | orchestrator | ok: [testbed-node-0]
2026-03-30 01:16:41.131668 | orchestrator |
2026-03-30 01:16:41.131673 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-30 01:16:41.131678 | orchestrator | Monday 30 March 2026 01:16:36 +0000 (0:00:00.160) 0:00:09.974 **********
2026-03-30 01:16:41.131684 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.131690 | orchestrator |
2026-03-30 01:16:41.131696 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-30 01:16:41.131701 | orchestrator | Monday 30 March 2026 01:16:36 +0000 (0:00:00.238) 0:00:10.213 **********
2026-03-30 01:16:41.131714 | orchestrator | skipping: [testbed-node-0]
2026-03-30 01:16:41.131720 | orchestrator |
2026-03-30 01:16:41.131794 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-30 01:16:41.131801 | orchestrator | Monday 30 March 2026 01:16:37 +0000 (0:00:00.262) 0:00:10.475 **********
2026-03-30 01:16:41.131806 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.131813 | orchestrator |
2026-03-30 01:16:41.131819 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-30 01:16:41.131824 | orchestrator | Monday 30 March 2026 01:16:38 +0000 (0:00:01.528) 0:00:12.004 **********
2026-03-30 01:16:41.131830 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.131837 | orchestrator |
2026-03-30 01:16:41.131843 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-30 01:16:41.131849 | orchestrator | Monday 30 March 2026 01:16:38 +0000 (0:00:00.267) 0:00:12.272 **********
2026-03-30 01:16:41.131864 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.131870 | orchestrator |
2026-03-30 01:16:41.131876 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:41.131881 | orchestrator | Monday 30 March 2026 01:16:39 +0000 (0:00:00.258) 0:00:12.530 **********
2026-03-30 01:16:41.131886 | orchestrator |
2026-03-30 01:16:41.131891 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:41.131897 | orchestrator | Monday 30 March 2026 01:16:39 +0000 (0:00:00.069) 0:00:12.600 **********
2026-03-30 01:16:41.131902 | orchestrator |
2026-03-30 01:16:41.131908 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-30 01:16:41.131913 | orchestrator | Monday 30 March 2026 01:16:39 +0000 (0:00:00.083) 0:00:12.683 **********
2026-03-30 01:16:41.131919 | orchestrator |
2026-03-30 01:16:41.131925 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-30 01:16:41.131931 | orchestrator | Monday 30 March 2026 01:16:39 +0000 (0:00:00.074) 0:00:12.757 **********
2026-03-30 01:16:41.131937 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-30 01:16:41.131942 | orchestrator |
2026-03-30 01:16:41.131948 | orchestrator | TASK [Print report file information] *******************************************
2026-03-30 01:16:41.131953 | orchestrator | Monday 30 March 2026 01:16:40 +0000 (0:00:01.279) 0:00:14.037 **********
2026-03-30 01:16:41.131959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-30 01:16:41.131965 | orchestrator |     "msg": [
2026-03-30 01:16:41.131972 | orchestrator |         "Validator run completed.",
2026-03-30 01:16:41.131978 | orchestrator |         "You can find the report file here:",
2026-03-30 01:16:41.131984 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-03-30T01:16:28+00:00-report.json",
2026-03-30 01:16:41.131992 | orchestrator |         "on the following host:",
2026-03-30 01:16:41.131999 | orchestrator |         "testbed-manager"
2026-03-30 01:16:41.132005 | orchestrator |     ]
2026-03-30 01:16:41.132010 | orchestrator | }
2026-03-30 01:16:41.132017 | orchestrator |
2026-03-30 01:16:41.132023 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:16:41.132031 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-30 01:16:41.132039 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:16:41.132071 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-30 01:16:41.432042 | orchestrator |
2026-03-30 01:16:41.432115 | orchestrator |
2026-03-30 01:16:41.432121 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:16:41.432127 | orchestrator | Monday 30 March 2026 01:16:41 +0000 (0:00:00.393) 0:00:14.431 **********
2026-03-30 01:16:41.432131 | orchestrator | ===============================================================================
2026-03-30 01:16:41.432135 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.54s
2026-03-30 01:16:41.432139 | orchestrator | Get container info ------------------------------------------------------ 1.53s
2026-03-30 01:16:41.432143 | orchestrator | Aggregate test results step one ----------------------------------------- 1.53s
2026-03-30 01:16:41.432147 | orchestrator | Write report file ------------------------------------------------------- 1.28s
2026-03-30 01:16:41.432153 | orchestrator | Get timestamp for report file ------------------------------------------- 1.01s
2026-03-30 01:16:41.432158 | orchestrator | Create report output directory ------------------------------------------ 0.70s
2026-03-30 01:16:41.432164 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.45s
2026-03-30 01:16:41.432170 | orchestrator | Print report file information ------------------------------------------- 0.39s
2026-03-30 01:16:41.432197 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s
2026-03-30 01:16:41.432203 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s
2026-03-30 01:16:41.432209 |
orchestrator | Set test result to passed if container is existing ---------------------- 0.31s
2026-03-30 01:16:41.432215 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-03-30 01:16:41.432221 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.30s
2026-03-30 01:16:41.432226 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2026-03-30 01:16:41.432232 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2026-03-30 01:16:41.432238 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-03-30 01:16:41.432245 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2026-03-30 01:16:41.432251 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-03-30 01:16:41.432258 | orchestrator | Print report file information ------------------------------------------- 0.26s
2026-03-30 01:16:41.432264 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-03-30 01:16:41.614443 | orchestrator | + osism validate ceph-osds
2026-03-30 01:17:00.502890 | orchestrator |
2026-03-30 01:17:00.502995 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-30 01:17:00.503008 | orchestrator |
2026-03-30 01:17:00.503015 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-30 01:17:00.503023 | orchestrator | Monday 30 March 2026 01:16:56 +0000 (0:00:00.495) 0:00:00.495 **********
2026-03-30 01:17:00.503030 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-30 01:17:00.503037 | orchestrator |
2026-03-30 01:17:00.503043 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-30 01:17:00.503047 | orchestrator | Monday 30 March 2026 01:16:57 +0000 (0:00:00.948) 0:00:01.443 **********
2026-03-30 01:17:00.503051 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-30 01:17:00.503055 | orchestrator |
2026-03-30 01:17:00.503059 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-30 01:17:00.503064 | orchestrator | Monday 30 March 2026 01:16:57 +0000 (0:00:00.245) 0:00:01.689 **********
2026-03-30 01:17:00.503067 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-30 01:17:00.503071 | orchestrator |
2026-03-30 01:17:00.503075 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-30 01:17:00.503079 | orchestrator | Monday 30 March 2026 01:16:58 +0000 (0:00:00.671) 0:00:02.360 **********
2026-03-30 01:17:00.503083 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:00.503088 | orchestrator |
2026-03-30 01:17:00.503092 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-30 01:17:00.503096 | orchestrator | Monday 30 March 2026 01:16:58 +0000 (0:00:00.123) 0:00:02.484 **********
2026-03-30 01:17:00.503101 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:00.503107 | orchestrator |
2026-03-30 01:17:00.503113 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-30 01:17:00.503119 | orchestrator | Monday 30 March 2026 01:16:58 +0000 (0:00:00.119) 0:00:02.603 **********
2026-03-30 01:17:00.503126 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:00.503132 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:17:00.503138 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:17:00.503144 | orchestrator |
2026-03-30 01:17:00.503151 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-30 01:17:00.503156 | orchestrator | Monday 30 March 2026 01:16:59 +0000 (0:00:00.443) 0:00:03.047 **********
2026-03-30 01:17:00.503160 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:00.503164 | orchestrator |
2026-03-30 01:17:00.503167 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-30 01:17:00.503195 | orchestrator | Monday 30 March 2026 01:16:59 +0000 (0:00:00.153) 0:00:03.201 **********
2026-03-30 01:17:00.503203 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:00.503209 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:00.503216 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:00.503222 | orchestrator |
2026-03-30 01:17:00.503227 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-30 01:17:00.503233 | orchestrator | Monday 30 March 2026 01:16:59 +0000 (0:00:00.322) 0:00:03.524 **********
2026-03-30 01:17:00.503239 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:00.503245 | orchestrator |
2026-03-30 01:17:00.503268 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-30 01:17:00.503273 | orchestrator | Monday 30 March 2026 01:16:59 +0000 (0:00:00.367) 0:00:03.891 **********
2026-03-30 01:17:00.503277 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:00.503281 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:00.503285 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:00.503289 | orchestrator |
2026-03-30 01:17:00.503293 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-30 01:17:00.503297 | orchestrator | Monday 30 March 2026 01:17:00 +0000 (0:00:00.305) 0:00:04.197 **********
2026-03-30 01:17:00.503303 | orchestrator | skipping: [testbed-node-3] => (item={'id': '669151e8a64a87de5a260aef95bf9970606f8efeae45616d6212eab290a44ffb', 'image':
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-30 01:17:00.503309 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c29ab89a06675cc191f0ec375083e54635cf14c7b2fc7b691dbb940bd73b37bc', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.503315 | orchestrator | skipping: [testbed-node-3] => (item={'id': '460a08ac6f25e77467b4d4d3699ba9ed09d24ca3689e1bae5afe73491b14eea3', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.503321 | orchestrator | skipping: [testbed-node-3] => (item={'id': '41a7c23548423ba09861448811f09a8a8fe5728136a01523a18956aaa7d6ac40', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.503334 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6360c5b0c04a1bea333fdf3f07cdca7c61f8966edb50412ba6a69acc3bd3b451', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-30 01:17:00.503349 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3f703ef98d80cc3a2be026e9cc63ec20a1c7c8a3a685a85d328f717a8ef6ad83', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-30 01:17:00.503353 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bbf197428ca6c99bb63351fa235401952057830822353c81ccaffbe355c2bd27', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-03-30 01:17:00.503357 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a3f93bc1d7d8c755faecc37af59f5f5e816f88b6c2b1055e29f0227d8c6afe0e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-30 01:17:00.503361 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9966e5f12e7ee85a241e13e16db1352e7f11a47271b84e550f222c2bcaa2108a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-30 01:17:00.503365 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7db2e1eb3b250ba1391b432c363f21183d44c22a373cc889d5f2bfdb0fdce471', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-30 01:17:00.503374 | orchestrator | ok: [testbed-node-3] => (item={'id': 'a3952d285b5f18e204eac221389e291ef718b88da72938726bbf9b3163069efd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-30 01:17:00.503379 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c31ff8fa1c952c189589e7cac775822d9603e0a926a7094df557d3c0d7328563', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-30 01:17:00.503384 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cd7ceac51ad8f080bac9c07858e7f1eb0f0bae1cc4ce425bfe4fd11e3147b99a', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2026-03-30 01:17:00.503388 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3cde730eef08a15f967460d8398fd84ce57b6e4ddfc3d2b1c1c26c9421a24c67', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-30 01:17:00.503393 | orchestrator | skipping: [testbed-node-3] => (item={'id': '50313a8999beef635270660857772b6f0a5d984cb570adf2aa0b3b68d9e00e34', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2026-03-30 01:17:00.503398 | orchestrator | skipping: [testbed-node-3] => (item={'id': '82697cc8009908365973c74a94cb1ba47ee1fb694d0beb2930a59d2b0971a2ff', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-30 01:17:00.503402 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7b5063e792afa8776a59bc346751c80b51070ecb53b4b825d6722b63d136ef37', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-30 01:17:00.503407 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f59100060d1a227ec432bfc85bd02d1d86f8c0b53763bd8542d590fe113209db', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-30 01:17:00.503411 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0a910f228a2191a182b26e666a71275113911fd3b9f90ef214cd69b9213847bb', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-30 01:17:00.503416 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6394c31d01ce1848b706aa47b6dc2c9a7a23732e5cb7a46dbe53c9b8b47ca244', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.503424 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a77338186f68eb37800f7c8ed5e95b7f524ca02c0cd87d20b56ecebb8ab3d78', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.503440 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e91b8086542337a52a080ae2ec6f300086b385ddb065ffbb61fdcc8b284c78f2', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.654554 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b96caf6954e1907e71fc9c84d48f5a4f618351573b79cb266714f9b20d42836c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-30 01:17:00.654647 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9628524c55efd0dce60fc363337cc7a259b5d181cac84de756e864437224895b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-30 01:17:00.654683 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3946827a1d8bb6075b20e0f95d5eb1dda7a95765fb8638641c292d483ecb2474', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-03-30 01:17:00.654692 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c5791e8e9c7d8e0194a2ff851748bd4ffc92887832f49da71e412b91cc2b4c59', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-30 01:17:00.654698 | orchestrator | skipping: [testbed-node-4] => (item={'id': '45cceeafc59b8652d6e03b093422e76a85ca4793bf574b73416b90284b822ca3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-30 01:17:00.654729 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c9dddd0b3b11655a439a370efccef24f8289f9adcdbcdab0534f05befe5ee10b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-30 01:17:00.654741 | orchestrator | ok: [testbed-node-4] => (item={'id': '5d6a729711d1de1d07c7e11a7f0b291496dddf33f05cb4af77353e7a1f0ec627', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-30 01:17:00.654867 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a5a2cdeeca37a9c759817a9681998a1de8387dbea97305886ac32e9917f78d78', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-30 01:17:00.654876 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2335c267bf72fb693fed14088ea8c9841f22a2b173f45479a35f3ddf3302f1d3', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2026-03-30 01:17:00.654883 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6d00f633fe3cc5ef63914a748e82f483d6944c4fdb9bc1fedc8f38c8f5badf9e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-30 01:17:00.654890 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9c48158034ed447b43d8f2358a7e6bd13006ce0abe7c1f50c8466fc8a4943b42', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2026-03-30 01:17:00.654897 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4adc0a4324a1cce05fa0557d691e98591b201e7634f1d587e6615921a6c56879', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-30 01:17:00.654903 | orchestrator | skipping: [testbed-node-4] => (item={'id': '866c6f90c5488f222917cd2d14136ec83658a17972343dc2e2598c1e632fed35', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-30 01:17:00.654910 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb9bc2e8d48c3d41f5c440c9d3f40253d98696dffb69f426281c8a65d7190a56', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-30 01:17:00.654917 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e05e12570c58a652383adf8665b50bd5eb7f2cf8107cd3022a5aae0ebaf77b6c', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-30 01:17:00.654942 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac1540cd0ea3f3316a43a327cad8f34cec3640a50a5d7b110d3f7a2dc6499e33', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.654958 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f7b20664a0cb414bfa8081a8f9d5c8761b15900be60cb9418b796133a7343997', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.654964 | orchestrator | skipping: [testbed-node-5] => (item={'id': '45095933dc7a7883a9ef7a629bf18d902dfb5fb0565adb224f8bc340f75dd206', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-30 01:17:00.654971 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b1e8cd8fd82894f9b0224146da8408d52e7c72cb7be20e5a956c929708c3b4a8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-30 01:17:00.654977 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97812736b0d1b539398395663e9e8db82da8cbdbf67263e8c48a20f6a1922e91', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-30 01:17:00.654984 | orchestrator | skipping: [testbed-node-5] => (item={'id': '041288ad4203f1be6cb67dff304ef9236befd2e5747055c4e07666b722198bb0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-03-30 01:17:00.654990 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e85af8dbe2b17d3847c4ee5fae70fef197500a9456d5803ebeb439ae34863f8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-30 01:17:00.654996 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e8aefc7452df4e22493d2c8488a4b56fa33ef0f69c3924eb0dfc62fb3cfd2611', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-30 01:17:00.655002 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4f6266ef771761b8651e504b3835f6f70b3f6f0d5dc54bf73ad1be7a7d4a88e8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-30 01:17:00.655029 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c2ea55c6b5efeaff1a780d6459188d0e8b32cb0c12bdc1140f8295f776ab7d06', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-30 01:17:00.655036 | orchestrator | ok: [testbed-node-5] => (item={'id': '5a7d5fa3cfccdaad850fec2b31389af7e1aae33b36dfc69d2f31593a9e32b41c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-30 01:17:00.655043 | orchestrator | skipping: [testbed-node-5] => (item={'id': '34ef045daaab92c713490d7bbb6cc1c2ef5968acd79b81c8f1f661e1886b56f9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2026-03-30 01:17:00.655049 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2c94ed26564994f3ca623a6e927ca45e30eb18f08af359d82d9c0d500f1db924', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-30 01:17:00.655055 | orchestrator | skipping: [testbed-node-5] => (item={'id': '63bf9a3bcdb2c93222ccc47b1a207ad6df6dbf6465335fdc492928d11ed50875', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2026-03-30 01:17:00.655066 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd1a671775174c7cbd4341c893b760366f7a09b2658054ec65405f072626532f1', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-30 01:17:00.655077 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9366188dcd91dbe6b55f2ebd26d3908de551050ddf5a28b595a9a601981451f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-30 01:17:00.655089 | orchestrator | skipping: [testbed-node-5] => (item={'id': '458edd7d655ed6f429cfd19bd06cd4d4da4810d1869e993fd5d0df323d619c20', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-30 01:17:13.490464 | orchestrator |
2026-03-30 01:17:13.490540 | orchestrator | TASK [Get count of ceph-osd containers on
host] ********************************
2026-03-30 01:17:13.490548 | orchestrator | Monday 30 March 2026 01:17:00 +0000 (0:00:00.619) 0:00:04.817 **********
2026-03-30 01:17:13.490552 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:13.490557 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:13.490562 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:13.490566 | orchestrator |
2026-03-30 01:17:13.490570 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-30 01:17:13.490576 | orchestrator | Monday 30 March 2026 01:17:01 +0000 (0:00:00.298) 0:00:05.115 **********
2026-03-30 01:17:13.490582 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:13.490589 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:17:13.490598 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:17:13.490606 | orchestrator |
2026-03-30 01:17:13.490614 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-30 01:17:13.490620 | orchestrator | Monday 30 March 2026 01:17:01 +0000 (0:00:00.296) 0:00:05.412 **********
2026-03-30 01:17:13.490627 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:13.490633 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:13.490639 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:13.490645 | orchestrator |
2026-03-30 01:17:13.490651 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-30 01:17:13.490658 | orchestrator | Monday 30 March 2026 01:17:01 +0000 (0:00:00.299) 0:00:05.712 **********
2026-03-30 01:17:13.490664 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:13.490670 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:13.490676 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:13.490682 | orchestrator |
2026-03-30 01:17:13.490687 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-30 01:17:13.490695 | orchestrator | Monday 30 March 2026 01:17:02 +0000 (0:00:00.449) 0:00:06.161 **********
2026-03-30 01:17:13.490701 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-30 01:17:13.490709 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-30 01:17:13.490716 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:13.490722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-30 01:17:13.490729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-30 01:17:13.490736 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:17:13.490742 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-30 01:17:13.490749 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-30 01:17:13.490756 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:17:13.490850 | orchestrator |
2026-03-30 01:17:13.490858 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-30 01:17:13.490863 | orchestrator | Monday 30 March 2026 01:17:02 +0000 (0:00:00.305) 0:00:06.467 **********
2026-03-30 01:17:13.490867 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:13.490871 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:13.490893 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:13.490897 | orchestrator |
2026-03-30 01:17:13.490901 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-30 01:17:13.490905 | orchestrator | Monday 30 March 2026 01:17:02 +0000 (0:00:00.298) 0:00:06.766 **********
2026-03-30 01:17:13.490909 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:13.490913 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:17:13.490918 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:17:13.490922 | orchestrator |
2026-03-30 01:17:13.490926 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-30 01:17:13.490930 | orchestrator | Monday 30 March 2026 01:17:03 +0000 (0:00:00.302) 0:00:07.068 **********
2026-03-30 01:17:13.490934 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:13.490937 | orchestrator | skipping: [testbed-node-4]
2026-03-30 01:17:13.490941 | orchestrator | skipping: [testbed-node-5]
2026-03-30 01:17:13.490945 | orchestrator |
2026-03-30 01:17:13.490949 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-30 01:17:13.490953 | orchestrator | Monday 30 March 2026 01:17:03 +0000 (0:00:00.440) 0:00:07.509 **********
2026-03-30 01:17:13.490957 | orchestrator | ok: [testbed-node-3]
2026-03-30 01:17:13.490961 | orchestrator | ok: [testbed-node-4]
2026-03-30 01:17:13.490965 | orchestrator | ok: [testbed-node-5]
2026-03-30 01:17:13.490969 | orchestrator |
2026-03-30 01:17:13.490973 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-30 01:17:13.490977 | orchestrator | Monday 30 March 2026 01:17:03 +0000 (0:00:00.307) 0:00:07.817 **********
2026-03-30 01:17:13.490981 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:13.490985 | orchestrator |
2026-03-30 01:17:13.490989 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-30 01:17:13.490993 | orchestrator | Monday 30 March 2026 01:17:04 +0000 (0:00:00.232) 0:00:08.049 **********
2026-03-30 01:17:13.491008 | orchestrator | skipping: [testbed-node-3]
2026-03-30 01:17:13.491012 | orchestrator |
2026-03-30 01:17:13.491015 | orchestrator | TASK [Aggregate test results step three]
*************************************** 2026-03-30 01:17:13.491019 | orchestrator | Monday 30 March 2026 01:17:04 +0000 (0:00:00.248) 0:00:08.297 ********** 2026-03-30 01:17:13.491023 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491027 | orchestrator | 2026-03-30 01:17:13.491032 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-30 01:17:13.491036 | orchestrator | Monday 30 March 2026 01:17:04 +0000 (0:00:00.241) 0:00:08.539 ********** 2026-03-30 01:17:13.491041 | orchestrator | 2026-03-30 01:17:13.491045 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-30 01:17:13.491049 | orchestrator | Monday 30 March 2026 01:17:04 +0000 (0:00:00.066) 0:00:08.605 ********** 2026-03-30 01:17:13.491056 | orchestrator | 2026-03-30 01:17:13.491062 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-30 01:17:13.491085 | orchestrator | Monday 30 March 2026 01:17:04 +0000 (0:00:00.064) 0:00:08.669 ********** 2026-03-30 01:17:13.491091 | orchestrator | 2026-03-30 01:17:13.491097 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-30 01:17:13.491104 | orchestrator | Monday 30 March 2026 01:17:04 +0000 (0:00:00.068) 0:00:08.738 ********** 2026-03-30 01:17:13.491110 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491116 | orchestrator | 2026-03-30 01:17:13.491123 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-30 01:17:13.491130 | orchestrator | Monday 30 March 2026 01:17:05 +0000 (0:00:00.588) 0:00:09.327 ********** 2026-03-30 01:17:13.491136 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491142 | orchestrator | 2026-03-30 01:17:13.491149 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-30 01:17:13.491154 | 
orchestrator | Monday 30 March 2026 01:17:05 +0000 (0:00:00.254) 0:00:09.582 ********** 2026-03-30 01:17:13.491159 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491163 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:13.491175 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:13.491185 | orchestrator | 2026-03-30 01:17:13.491193 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-30 01:17:13.491199 | orchestrator | Monday 30 March 2026 01:17:05 +0000 (0:00:00.283) 0:00:09.865 ********** 2026-03-30 01:17:13.491206 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491212 | orchestrator | 2026-03-30 01:17:13.491218 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-30 01:17:13.491224 | orchestrator | Monday 30 March 2026 01:17:06 +0000 (0:00:00.249) 0:00:10.115 ********** 2026-03-30 01:17:13.491230 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-30 01:17:13.491236 | orchestrator | 2026-03-30 01:17:13.491242 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-30 01:17:13.491248 | orchestrator | Monday 30 March 2026 01:17:08 +0000 (0:00:01.840) 0:00:11.955 ********** 2026-03-30 01:17:13.491254 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491260 | orchestrator | 2026-03-30 01:17:13.491267 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-30 01:17:13.491274 | orchestrator | Monday 30 March 2026 01:17:08 +0000 (0:00:00.124) 0:00:12.079 ********** 2026-03-30 01:17:13.491279 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491286 | orchestrator | 2026-03-30 01:17:13.491291 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-30 01:17:13.491297 | orchestrator | Monday 30 March 2026 01:17:08 +0000 (0:00:00.305) 0:00:12.385 
********** 2026-03-30 01:17:13.491303 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491309 | orchestrator | 2026-03-30 01:17:13.491315 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-30 01:17:13.491321 | orchestrator | Monday 30 March 2026 01:17:08 +0000 (0:00:00.109) 0:00:12.494 ********** 2026-03-30 01:17:13.491327 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491333 | orchestrator | 2026-03-30 01:17:13.491339 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-30 01:17:13.491343 | orchestrator | Monday 30 March 2026 01:17:08 +0000 (0:00:00.136) 0:00:12.631 ********** 2026-03-30 01:17:13.491347 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491350 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:13.491354 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:13.491358 | orchestrator | 2026-03-30 01:17:13.491361 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-30 01:17:13.491365 | orchestrator | Monday 30 March 2026 01:17:09 +0000 (0:00:00.431) 0:00:13.063 ********** 2026-03-30 01:17:13.491369 | orchestrator | changed: [testbed-node-3] 2026-03-30 01:17:13.491373 | orchestrator | changed: [testbed-node-4] 2026-03-30 01:17:13.491376 | orchestrator | changed: [testbed-node-5] 2026-03-30 01:17:13.491380 | orchestrator | 2026-03-30 01:17:13.491384 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-30 01:17:13.491387 | orchestrator | Monday 30 March 2026 01:17:11 +0000 (0:00:01.875) 0:00:14.939 ********** 2026-03-30 01:17:13.491391 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491395 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:13.491398 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:13.491402 | orchestrator | 2026-03-30 01:17:13.491406 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-03-30 01:17:13.491410 | orchestrator | Monday 30 March 2026 01:17:11 +0000 (0:00:00.278) 0:00:15.217 ********** 2026-03-30 01:17:13.491413 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491417 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:13.491421 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:13.491424 | orchestrator | 2026-03-30 01:17:13.491428 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-30 01:17:13.491432 | orchestrator | Monday 30 March 2026 01:17:12 +0000 (0:00:00.848) 0:00:16.066 ********** 2026-03-30 01:17:13.491436 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491439 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:17:13.491448 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:17:13.491452 | orchestrator | 2026-03-30 01:17:13.491456 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-30 01:17:13.491459 | orchestrator | Monday 30 March 2026 01:17:12 +0000 (0:00:00.293) 0:00:16.359 ********** 2026-03-30 01:17:13.491463 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:13.491471 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:13.491475 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:13.491479 | orchestrator | 2026-03-30 01:17:13.491483 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-30 01:17:13.491486 | orchestrator | Monday 30 March 2026 01:17:12 +0000 (0:00:00.298) 0:00:16.658 ********** 2026-03-30 01:17:13.491490 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491494 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:17:13.491497 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:17:13.491501 | orchestrator | 2026-03-30 01:17:13.491505 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-03-30 01:17:13.491508 | orchestrator | Monday 30 March 2026 01:17:13 +0000 (0:00:00.313) 0:00:16.971 ********** 2026-03-30 01:17:13.491512 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:13.491516 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:17:13.491520 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:17:13.491523 | orchestrator | 2026-03-30 01:17:13.491533 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-30 01:17:20.608928 | orchestrator | Monday 30 March 2026 01:17:13 +0000 (0:00:00.438) 0:00:17.410 ********** 2026-03-30 01:17:20.609024 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:20.609032 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:20.609036 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:20.609040 | orchestrator | 2026-03-30 01:17:20.609045 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-30 01:17:20.609049 | orchestrator | Monday 30 March 2026 01:17:13 +0000 (0:00:00.487) 0:00:17.897 ********** 2026-03-30 01:17:20.609054 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:20.609058 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:20.609061 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:20.609065 | orchestrator | 2026-03-30 01:17:20.609069 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-30 01:17:20.609073 | orchestrator | Monday 30 March 2026 01:17:14 +0000 (0:00:00.499) 0:00:18.396 ********** 2026-03-30 01:17:20.609077 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:20.609081 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:20.609084 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:20.609088 | orchestrator | 2026-03-30 01:17:20.609092 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-30 
01:17:20.609096 | orchestrator | Monday 30 March 2026 01:17:14 +0000 (0:00:00.298) 0:00:18.695 ********** 2026-03-30 01:17:20.609100 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:20.609104 | orchestrator | skipping: [testbed-node-4] 2026-03-30 01:17:20.609108 | orchestrator | skipping: [testbed-node-5] 2026-03-30 01:17:20.609111 | orchestrator | 2026-03-30 01:17:20.609115 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-30 01:17:20.609121 | orchestrator | Monday 30 March 2026 01:17:15 +0000 (0:00:00.452) 0:00:19.147 ********** 2026-03-30 01:17:20.609127 | orchestrator | ok: [testbed-node-3] 2026-03-30 01:17:20.609133 | orchestrator | ok: [testbed-node-4] 2026-03-30 01:17:20.609139 | orchestrator | ok: [testbed-node-5] 2026-03-30 01:17:20.609145 | orchestrator | 2026-03-30 01:17:20.609151 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-30 01:17:20.609157 | orchestrator | Monday 30 March 2026 01:17:15 +0000 (0:00:00.308) 0:00:19.456 ********** 2026-03-30 01:17:20.609164 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 01:17:20.609171 | orchestrator | 2026-03-30 01:17:20.609176 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-30 01:17:20.609182 | orchestrator | Monday 30 March 2026 01:17:15 +0000 (0:00:00.253) 0:00:19.710 ********** 2026-03-30 01:17:20.609216 | orchestrator | skipping: [testbed-node-3] 2026-03-30 01:17:20.609223 | orchestrator | 2026-03-30 01:17:20.609229 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-30 01:17:20.609236 | orchestrator | Monday 30 March 2026 01:17:16 +0000 (0:00:00.236) 0:00:19.946 ********** 2026-03-30 01:17:20.609242 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 01:17:20.609249 | orchestrator | 2026-03-30 01:17:20.609255 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-30 01:17:20.609262 | orchestrator | Monday 30 March 2026 01:17:17 +0000 (0:00:01.719) 0:00:21.665 ********** 2026-03-30 01:17:20.609268 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 01:17:20.609275 | orchestrator | 2026-03-30 01:17:20.609281 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-30 01:17:20.609288 | orchestrator | Monday 30 March 2026 01:17:17 +0000 (0:00:00.258) 0:00:21.923 ********** 2026-03-30 01:17:20.609294 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 01:17:20.609301 | orchestrator | 2026-03-30 01:17:20.609305 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-30 01:17:20.609309 | orchestrator | Monday 30 March 2026 01:17:18 +0000 (0:00:00.250) 0:00:22.174 ********** 2026-03-30 01:17:20.609312 | orchestrator | 2026-03-30 01:17:20.609316 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-30 01:17:20.609320 | orchestrator | Monday 30 March 2026 01:17:18 +0000 (0:00:00.213) 0:00:22.387 ********** 2026-03-30 01:17:20.609324 | orchestrator | 2026-03-30 01:17:20.609327 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-30 01:17:20.609331 | orchestrator | Monday 30 March 2026 01:17:18 +0000 (0:00:00.065) 0:00:22.453 ********** 2026-03-30 01:17:20.609335 | orchestrator | 2026-03-30 01:17:20.609339 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-30 01:17:20.609343 | orchestrator | Monday 30 March 2026 01:17:18 +0000 (0:00:00.069) 0:00:22.522 ********** 2026-03-30 01:17:20.609346 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-30 01:17:20.609350 | orchestrator | 
2026-03-30 01:17:20.609354 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-30 01:17:20.609357 | orchestrator | Monday 30 March 2026 01:17:19 +0000 (0:00:01.279) 0:00:23.802 ********** 2026-03-30 01:17:20.609361 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-30 01:17:20.609365 | orchestrator |  "msg": [ 2026-03-30 01:17:20.609369 | orchestrator |  "Validator run completed.", 2026-03-30 01:17:20.609374 | orchestrator |  "You can find the report file here:", 2026-03-30 01:17:20.609378 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-30T01:16:57+00:00-report.json", 2026-03-30 01:17:20.609383 | orchestrator |  "on the following host:", 2026-03-30 01:17:20.609387 | orchestrator |  "testbed-manager" 2026-03-30 01:17:20.609391 | orchestrator |  ] 2026-03-30 01:17:20.609395 | orchestrator | } 2026-03-30 01:17:20.609399 | orchestrator | 2026-03-30 01:17:20.609403 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:17:20.609408 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-30 01:17:20.609413 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-30 01:17:20.609428 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-30 01:17:20.609432 | orchestrator | 2026-03-30 01:17:20.609436 | orchestrator | 2026-03-30 01:17:20.609440 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-30 01:17:20.609480 | orchestrator | Monday 30 March 2026 01:17:20 +0000 (0:00:00.421) 0:00:24.224 ********** 2026-03-30 01:17:20.609485 | orchestrator | =============================================================================== 2026-03-30 01:17:20.609490 | orchestrator | List ceph LVM volumes 
and collect data ---------------------------------- 1.88s 2026-03-30 01:17:20.609494 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.84s 2026-03-30 01:17:20.609499 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s 2026-03-30 01:17:20.609503 | orchestrator | Write report file ------------------------------------------------------- 1.28s 2026-03-30 01:17:20.609507 | orchestrator | Get timestamp for report file ------------------------------------------- 0.95s 2026-03-30 01:17:20.609511 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.85s 2026-03-30 01:17:20.609515 | orchestrator | Create report output directory ------------------------------------------ 0.67s 2026-03-30 01:17:20.609520 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.62s 2026-03-30 01:17:20.609524 | orchestrator | Print report file information ------------------------------------------- 0.59s 2026-03-30 01:17:20.609528 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s 2026-03-30 01:17:20.609532 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-03-30 01:17:20.609537 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.45s 2026-03-30 01:17:20.609541 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2026-03-30 01:17:20.609545 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.44s 2026-03-30 01:17:20.609550 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.44s 2026-03-30 01:17:20.609554 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.44s 2026-03-30 01:17:20.609558 | orchestrator | Prepare test data 
------------------------------------------------------- 0.43s 2026-03-30 01:17:20.609563 | orchestrator | Print report file information ------------------------------------------- 0.42s 2026-03-30 01:17:20.609567 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.37s 2026-03-30 01:17:20.609571 | orchestrator | Flush handlers ---------------------------------------------------------- 0.35s 2026-03-30 01:17:20.782578 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-30 01:17:20.790203 | orchestrator | + set -e 2026-03-30 01:17:20.790322 | orchestrator | + source /opt/manager-vars.sh 2026-03-30 01:17:20.790330 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-30 01:17:20.790335 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-30 01:17:20.790339 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-30 01:17:20.790343 | orchestrator | ++ CEPH_VERSION=reef 2026-03-30 01:17:20.790348 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-30 01:17:20.790352 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-30 01:17:20.790357 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 01:17:20.790361 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 01:17:20.790365 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-30 01:17:20.790369 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-30 01:17:20.790372 | orchestrator | ++ export ARA=false 2026-03-30 01:17:20.790377 | orchestrator | ++ ARA=false 2026-03-30 01:17:20.790381 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-30 01:17:20.790385 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-30 01:17:20.790389 | orchestrator | ++ export TEMPEST=true 2026-03-30 01:17:20.790392 | orchestrator | ++ TEMPEST=true 2026-03-30 01:17:20.790435 | orchestrator | ++ export IS_ZUUL=true 2026-03-30 01:17:20.790505 | orchestrator | ++ IS_ZUUL=true 2026-03-30 01:17:20.790509 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 01:17:20.790514 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232 2026-03-30 01:17:20.790518 | orchestrator | ++ export EXTERNAL_API=false 2026-03-30 01:17:20.790522 | orchestrator | ++ EXTERNAL_API=false 2026-03-30 01:17:20.790525 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-30 01:17:20.790537 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-30 01:17:20.790541 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-30 01:17:20.790602 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-30 01:17:20.790608 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-30 01:17:20.790630 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-30 01:17:20.790634 | orchestrator | + source /etc/os-release 2026-03-30 01:17:20.790638 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-30 01:17:20.790641 | orchestrator | ++ NAME=Ubuntu 2026-03-30 01:17:20.790645 | orchestrator | ++ VERSION_ID=24.04 2026-03-30 01:17:20.790649 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-30 01:17:20.790653 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-30 01:17:20.790657 | orchestrator | ++ ID=ubuntu 2026-03-30 01:17:20.790661 | orchestrator | ++ ID_LIKE=debian 2026-03-30 01:17:20.790665 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-30 01:17:20.790669 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-30 01:17:20.790673 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-30 01:17:20.790677 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-30 01:17:20.790682 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-30 01:17:20.790686 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-30 01:17:20.790692 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-30 01:17:20.790739 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 
2026-03-30 01:17:20.790748 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-30 01:17:20.820834 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-30 01:17:44.055321 | orchestrator | 2026-03-30 01:17:44.055405 | orchestrator | # Status of Elasticsearch 2026-03-30 01:17:44.055416 | orchestrator | 2026-03-30 01:17:44.055423 | orchestrator | + pushd /opt/configuration/contrib 2026-03-30 01:17:44.055431 | orchestrator | + echo 2026-03-30 01:17:44.055438 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-30 01:17:44.055445 | orchestrator | + echo 2026-03-30 01:17:44.055452 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-30 01:17:44.227950 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-30 01:17:44.228016 | orchestrator | 2026-03-30 01:17:44.228022 | orchestrator | # Status of MariaDB 2026-03-30 01:17:44.228027 | orchestrator | 2026-03-30 01:17:44.228032 | orchestrator | + echo 2026-03-30 01:17:44.228036 | orchestrator | + echo '# Status of MariaDB' 2026-03-30 01:17:44.228040 | orchestrator | + echo 2026-03-30 01:17:44.228587 | orchestrator | ++ semver latest 10.0.0-0 2026-03-30 01:17:44.266623 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-30 01:17:44.266710 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-30 01:17:44.266720 | orchestrator | + osism status database 2026-03-30 01:17:45.852210 | orchestrator | 2026-03-30 01:17:45 | ERROR  | Unable to get ansible vault password 2026-03-30 01:17:45.852278 | orchestrator | 2026-03-30 
01:17:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:17:45.852285 | orchestrator | 2026-03-30 01:17:45 | ERROR  | Dropping encrypted entries 2026-03-30 01:17:45.885413 | orchestrator | 2026-03-30 01:17:45 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-03-30 01:17:45.896621 | orchestrator | 2026-03-30 01:17:45 | INFO  | Cluster Status: Primary 2026-03-30 01:17:45.896691 | orchestrator | 2026-03-30 01:17:45 | INFO  | Connected: ON 2026-03-30 01:17:45.896697 | orchestrator | 2026-03-30 01:17:45 | INFO  | Ready: ON 2026-03-30 01:17:45.896702 | orchestrator | 2026-03-30 01:17:45 | INFO  | Cluster Size: 3 2026-03-30 01:17:45.896707 | orchestrator | 2026-03-30 01:17:45 | INFO  | Local State: Synced 2026-03-30 01:17:45.896711 | orchestrator | 2026-03-30 01:17:45 | INFO  | Cluster State UUID: f941d81b-2bd2-11f1-9f51-16ad597ddd29 2026-03-30 01:17:45.896717 | orchestrator | 2026-03-30 01:17:45 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-03-30 01:17:45.896722 | orchestrator | 2026-03-30 01:17:45 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-03-30 01:17:45.896747 | orchestrator | 2026-03-30 01:17:45 | INFO  | Local Node UUID: 2ea150de-2bd3-11f1-a345-a2f44604a5cb 2026-03-30 01:17:45.896752 | orchestrator | 2026-03-30 01:17:45 | INFO  | Flow Control Paused: 0.00% 2026-03-30 01:17:45.896756 | orchestrator | 2026-03-30 01:17:45 | INFO  | Recv Queue Avg: 0 2026-03-30 01:17:45.896760 | orchestrator | 2026-03-30 01:17:45 | INFO  | Send Queue Avg: 0.0012012 2026-03-30 01:17:45.896764 | orchestrator | 2026-03-30 01:17:45 | INFO  | Transactions: 4417 local commits, 6602 replicated, 94 received 2026-03-30 01:17:45.896768 | orchestrator | 2026-03-30 01:17:45 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-03-30 01:17:45.896894 | orchestrator | 2026-03-30 01:17:45 | INFO  | MariaDB Uptime: 22 minutes, 1 second 2026-03-30 
01:17:45.896907 | orchestrator | 2026-03-30 01:17:45 | INFO  | Threads: 132 connected, 2 running 2026-03-30 01:17:45.896911 | orchestrator | 2026-03-30 01:17:45 | INFO  | Queries: 214271 total, 0 slow 2026-03-30 01:17:45.896915 | orchestrator | 2026-03-30 01:17:45 | INFO  | Aborted Connects: 137 2026-03-30 01:17:45.896920 | orchestrator | 2026-03-30 01:17:45 | INFO  | MariaDB Galera Cluster validation PASSED 2026-03-30 01:17:46.110838 | orchestrator | 2026-03-30 01:17:46.110923 | orchestrator | # Status of Prometheus 2026-03-30 01:17:46.110932 | orchestrator | 2026-03-30 01:17:46.110936 | orchestrator | + echo 2026-03-30 01:17:46.110941 | orchestrator | + echo '# Status of Prometheus' 2026-03-30 01:17:46.110946 | orchestrator | + echo 2026-03-30 01:17:46.110953 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-30 01:17:46.159590 | orchestrator | Unauthorized 2026-03-30 01:17:46.162690 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-30 01:17:46.222946 | orchestrator | Unauthorized 2026-03-30 01:17:46.226032 | orchestrator | 2026-03-30 01:17:46.226112 | orchestrator | # Status of RabbitMQ 2026-03-30 01:17:46.226121 | orchestrator | 2026-03-30 01:17:46.226128 | orchestrator | + echo 2026-03-30 01:17:46.226134 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-30 01:17:46.226141 | orchestrator | + echo 2026-03-30 01:17:46.226991 | orchestrator | ++ semver latest 10.0.0-0 2026-03-30 01:17:46.283102 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-30 01:17:46.283182 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-30 01:17:46.283192 | orchestrator | + osism status messaging 2026-03-30 01:17:53.285327 | orchestrator | 2026-03-30 01:17:53 | ERROR  | Unable to get ansible vault password 2026-03-30 01:17:53.285433 | orchestrator | 2026-03-30 01:17:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:17:53.285449 | orchestrator | 
2026-03-30 01:17:53 | ERROR  | Dropping encrypted entries
2026-03-30 01:17:53.320305 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-03-30 01:17:53.378305 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-03-30 01:17:53.378374 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-03-30 01:17:53.378380 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-03-30 01:17:53.378385 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Cluster Size: 3
2026-03-30 01:17:53.378390 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-30 01:17:53.378395 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-30 01:17:53.378399 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-03-30 01:17:53.378488 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Connections: 198, Channels: 197, Queues: 173
2026-03-30 01:17:53.378511 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Messages: 236 total, 236 ready, 0 unacked
2026-03-30 01:17:53.378736 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Message Rates: 6.2/s publish, 6.6/s deliver
2026-03-30 01:17:53.378744 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Disk Free: 58.0 GB (limit: 0.0 GB)
2026-03-30 01:17:53.379167 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-03-30 01:17:53.379285 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] File Descriptors: 125/1024
2026-03-30 01:17:53.379352 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-0] Sockets: 79/832
2026-03-30 01:17:53.379441 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-03-30 01:17:53.439577 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7
2026-03-30 01:17:53.439709 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15
2026-03-30 01:17:53.439718 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-03-30 01:17:53.439724 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Cluster Size: 3
2026-03-30 01:17:53.439730 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-30 01:17:53.439743 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-30 01:17:53.439748 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-03-30 01:17:53.439917 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Connections: 198, Channels: 197, Queues: 173
2026-03-30 01:17:53.440138 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Messages: 236 total, 236 ready, 0 unacked
2026-03-30 01:17:53.440646 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Message Rates: 6.2/s publish, 6.6/s deliver
2026-03-30 01:17:53.440707 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-03-30 01:17:53.440720 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-03-30 01:17:53.441234 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] File Descriptors: 109/1024
2026-03-30 01:17:53.441260 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-1] Sockets: 63/832
2026-03-30 01:17:53.441270 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-03-30 01:17:53.502062 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7
2026-03-30 01:17:53.502149 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15
2026-03-30 01:17:53.502159 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-03-30 01:17:53.502167 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Cluster Size: 3
2026-03-30 01:17:53.502175 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-30 01:17:53.503047 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-30 01:17:53.503095 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-03-30 01:17:53.503100 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Connections: 198, Channels: 197, Queues: 173
2026-03-30 01:17:53.503105 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Messages: 236 total, 236 ready, 0 unacked
2026-03-30 01:17:53.503109 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Message Rates: 6.2/s publish, 6.6/s deliver
2026-03-30 01:17:53.503113 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Disk Free: 58.4 GB (limit: 0.0 GB)
2026-03-30 01:17:53.503244 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-03-30 01:17:53.503251 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] File Descriptors: 104/1024
2026-03-30 01:17:53.503255 | orchestrator | 2026-03-30 01:17:53 | INFO  | [testbed-node-2] Sockets: 56/832
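
The per-node figures above come from the RabbitMQ Management API. As a minimal sketch of the kind of checks that lead to the "validation PASSED" verdict below, the following evaluates a hand-built snapshot of the cluster; the dict field names (`running`, `partitions`, `fd_used`, `fd_total`) are illustrative and not the exact Management API schema, and `validate_cluster` is not OSISM's actual validation code:

```python
# Minimal health check over a pre-fetched RabbitMQ cluster snapshot.
# Field names are illustrative, not the real /api/nodes schema.

def validate_cluster(nodes, expected_size=3):
    """Return (passed, problems) for a RabbitMQ cluster snapshot."""
    problems = []
    if len(nodes) != expected_size:
        problems.append(f"cluster size {len(nodes)} != {expected_size}")
    for node in nodes:
        if not node["running"]:
            problems.append(f"{node['name']} not running")
        if node["partitions"]:
            problems.append(f"{node['name']} sees partitions: {node['partitions']}")
        if node["fd_used"] >= node["fd_total"]:
            problems.append(f"{node['name']} out of file descriptors")
    return (not problems, problems)

# Snapshot mirroring the values logged above for testbed-node-0..2.
snapshot = [
    {"name": "rabbit@testbed-node-0", "running": True, "partitions": [],
     "fd_used": 125, "fd_total": 1024},
    {"name": "rabbit@testbed-node-1", "running": True, "partitions": [],
     "fd_used": 109, "fd_total": 1024},
    {"name": "rabbit@testbed-node-2", "running": True, "partitions": [],
     "fd_used": 104, "fd_total": 1024},
]
passed, problems = validate_cluster(snapshot)
print("PASSED" if passed else problems)  # → PASSED
```

With all three nodes running, no partitions reported, and file descriptors well under their limits, the snapshot passes, matching the verdict in the log.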
2026-03-30 01:17:53.503259 | orchestrator | 2026-03-30 01:17:53 | INFO  | RabbitMQ Cluster validation PASSED
2026-03-30 01:17:53.751248 | orchestrator | 
2026-03-30 01:17:53.751322 | orchestrator | # Status of Redis
2026-03-30 01:17:53.751332 | orchestrator | 
2026-03-30 01:17:53.751340 | orchestrator | + echo
2026-03-30 01:17:53.751347 | orchestrator | + echo '# Status of Redis'
2026-03-30 01:17:53.751355 | orchestrator | + echo
2026-03-30 01:17:53.751364 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-30 01:17:53.758764 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001834s;;;0.000000;10.000000
2026-03-30 01:17:53.758912 | orchestrator | 
2026-03-30 01:17:53.758931 | orchestrator | # Create backup of MariaDB database
2026-03-30 01:17:53.758943 | orchestrator | 
2026-03-30 01:17:53.758956 | orchestrator | + popd
2026-03-30 01:17:53.758969 | orchestrator | + echo
2026-03-30 01:17:53.758982 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-30 01:17:53.758995 | orchestrator | + echo
2026-03-30 01:17:53.759010 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-30 01:17:54.999887 | orchestrator | 2026-03-30 01:17:55 | INFO  | Prepare task for execution of mariadb_backup.
2026-03-30 01:17:55.060707 | orchestrator | 2026-03-30 01:17:55 | INFO  | Task 721e21d8-d34d-4bfa-aa2e-01f474330872 (mariadb_backup) was prepared for execution.
2026-03-30 01:17:55.060781 | orchestrator | 2026-03-30 01:17:55 | INFO  | It takes a moment until task 721e21d8-d34d-4bfa-aa2e-01f474330872 (mariadb_backup) has been started and output is visible here.
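
The Redis probe above is `check_tcp` from the Monitoring Plugins suite; everything after the `|` in its output is Nagios-style performance data in the `label=value[uom];warn;crit;min;max` convention (warn and crit are empty here). A small parser sketch for that line; the function name and dict layout are ours, not part of any plugin API:

```python
import re

def parse_perfdata(plugin_output):
    """Split a Nagios plugin line into status text and perfdata metrics.

    Perfdata follows 'label=value[uom];warn;crit;min;max'; empty
    warn/crit fields (as in the Redis check above) become None.
    """
    text, _, perf = plugin_output.partition("|")
    metrics = {}
    for chunk in perf.split():
        label, _, rest = chunk.partition("=")
        fields = rest.split(";")
        value = float(re.match(r"[\d.]+", fields[0]).group())
        pad = fields[1:] + [""] * (4 - len(fields[1:]))
        metrics[label] = {
            "value": value,
            "warn": float(pad[0]) if pad[0] else None,
            "crit": float(pad[1]) if pad[1] else None,
            "min": float(pad[2]) if pad[2] else None,
            "max": float(pad[3]) if pad[3] else None,
        }
    return text.strip(), metrics

line = ("TCP OK - 0.002 second response time on 192.168.16.10 port 6379"
        "|time=0.001834s;;;0.000000;10.000000")
status, metrics = parse_perfdata(line)
print(status.split(" - ")[0])    # → TCP OK
print(metrics["time"]["value"])  # → 0.001834
```

The 0.001834 s response time sits far below the 10 s ceiling in the max field, which is why the plugin reports OK.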
2026-03-30 01:24:48.230461 | orchestrator | 2026-03-30 01:24:48.230562 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-30 01:24:48.230571 | orchestrator | 2026-03-30 01:24:48.230575 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-30 01:24:48.230580 | orchestrator | Monday 30 March 2026 01:17:58 +0000 (0:00:00.227) 0:00:00.227 ********** 2026-03-30 01:24:48.230584 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:24:48.230589 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:24:48.230593 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:24:48.230597 | orchestrator | 2026-03-30 01:24:48.230601 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-30 01:24:48.230605 | orchestrator | Monday 30 March 2026 01:17:58 +0000 (0:00:00.296) 0:00:00.523 ********** 2026-03-30 01:24:48.230609 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-30 01:24:48.230615 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-30 01:24:48.230621 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-30 01:24:48.230626 | orchestrator | 2026-03-30 01:24:48.230634 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-30 01:24:48.230643 | orchestrator | 2026-03-30 01:24:48.230671 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-30 01:24:48.230678 | orchestrator | Monday 30 March 2026 01:17:58 +0000 (0:00:00.398) 0:00:00.921 ********** 2026-03-30 01:24:48.230684 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-30 01:24:48.230727 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-30 01:24:48.230733 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-30 01:24:48.230737 | orchestrator | 
2026-03-30 01:24:48.230741 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-30 01:24:48.230745 | orchestrator | Monday 30 March 2026 01:17:59 +0000 (0:00:00.404) 0:00:01.326 ********** 2026-03-30 01:24:48.230749 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-30 01:24:48.230754 | orchestrator | 2026-03-30 01:24:48.230759 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-30 01:24:48.230763 | orchestrator | Monday 30 March 2026 01:17:59 +0000 (0:00:00.642) 0:00:01.969 ********** 2026-03-30 01:24:48.230766 | orchestrator | ok: [testbed-node-2] 2026-03-30 01:24:48.230770 | orchestrator | ok: [testbed-node-0] 2026-03-30 01:24:48.230774 | orchestrator | ok: [testbed-node-1] 2026-03-30 01:24:48.230778 | orchestrator | 2026-03-30 01:24:48.230782 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-30 01:24:48.230785 | orchestrator | Monday 30 March 2026 01:18:03 +0000 (0:00:03.253) 0:00:05.222 ********** 2026-03-30 01:24:48.230789 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:24:48.230794 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:24:48.230797 | orchestrator | 2026-03-30 01:24:48.230801 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230805 | orchestrator | 2026-03-30 01:24:48.230809 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230812 | orchestrator | 2026-03-30 01:24:48.230817 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230820 | orchestrator | 2026-03-30 01:24:48.230825 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is 
running] *** 2026-03-30 01:24:48.230828 | orchestrator | 2026-03-30 01:24:48.230832 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230836 | orchestrator | 2026-03-30 01:24:48.230840 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230844 | orchestrator | 2026-03-30 01:24:48.230848 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230851 | orchestrator | 2026-03-30 01:24:48.230855 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230859 | orchestrator | 2026-03-30 01:24:48.230863 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-30 01:24:48.230866 | orchestrator | changed: [testbed-node-0] 2026-03-30 01:24:48.230870 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-30 01:24:48.230874 | orchestrator | 2026-03-30 01:24:48.230890 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-30 01:24:48.230894 | orchestrator | skipping: no hosts matched 2026-03-30 01:24:48.230960 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-30 01:24:48.230965 | orchestrator | 2026-03-30 01:24:48.230970 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-30 01:24:48.230973 | orchestrator | skipping: no hosts matched 2026-03-30 01:24:48.230978 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-30 01:24:48.230982 | orchestrator | mariadb_bootstrap_restart 2026-03-30 01:24:48.230987 | orchestrator | 2026-03-30 01:24:48.230992 | orchestrator | PLAY [Restart bootstrap mariadb service] 
*************************************** 2026-03-30 01:24:48.230996 | orchestrator | skipping: no hosts matched 2026-03-30 01:24:48.231008 | orchestrator | 2026-03-30 01:24:48.231012 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-30 01:24:48.231017 | orchestrator | 2026-03-30 01:24:48.231021 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-30 01:24:48.231026 | orchestrator | Monday 30 March 2026 01:24:47 +0000 (0:06:44.294) 0:06:49.517 ********** 2026-03-30 01:24:48.231031 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:24:48.231035 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:24:48.231040 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:24:48.231045 | orchestrator | 2026-03-30 01:24:48.231049 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-30 01:24:48.231054 | orchestrator | Monday 30 March 2026 01:24:47 +0000 (0:00:00.297) 0:06:49.814 ********** 2026-03-30 01:24:48.231058 | orchestrator | skipping: [testbed-node-0] 2026-03-30 01:24:48.231062 | orchestrator | skipping: [testbed-node-1] 2026-03-30 01:24:48.231066 | orchestrator | skipping: [testbed-node-2] 2026-03-30 01:24:48.231070 | orchestrator | 2026-03-30 01:24:48.231074 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-30 01:24:48.231094 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-30 01:24:48.231099 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-30 01:24:48.231103 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-30 01:24:48.231107 | orchestrator | 2026-03-30 01:24:48.231111 | orchestrator | 2026-03-30 01:24:48.231115 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-30 01:24:48.231118 | orchestrator | Monday 30 March 2026 01:24:47 +0000 (0:00:00.211) 0:06:50.026 ********** 2026-03-30 01:24:48.231122 | orchestrator | =============================================================================== 2026-03-30 01:24:48.231126 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 404.29s 2026-03-30 01:24:48.231130 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.25s 2026-03-30 01:24:48.231134 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.64s 2026-03-30 01:24:48.231138 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2026-03-30 01:24:48.231142 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2026-03-30 01:24:48.231146 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-03-30 01:24:48.231150 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-03-30 01:24:48.231154 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s 2026-03-30 01:24:48.413604 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-30 01:24:48.424304 | orchestrator | + set -e 2026-03-30 01:24:48.424380 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-30 01:24:48.424392 | orchestrator | ++ export INTERACTIVE=false 2026-03-30 01:24:48.424400 | orchestrator | ++ INTERACTIVE=false 2026-03-30 01:24:48.424407 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-30 01:24:48.424413 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-30 01:24:48.424421 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-30 01:24:48.425724 | orchestrator | +++ awk '-F: ' 
'/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-30 01:24:48.432488 | orchestrator | 2026-03-30 01:24:48.432558 | orchestrator | # OpenStack endpoints 2026-03-30 01:24:48.432564 | orchestrator | 2026-03-30 01:24:48.432569 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-30 01:24:48.432574 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-30 01:24:48.432578 | orchestrator | + export OS_CLOUD=admin 2026-03-30 01:24:48.432582 | orchestrator | + OS_CLOUD=admin 2026-03-30 01:24:48.432586 | orchestrator | + echo 2026-03-30 01:24:48.432590 | orchestrator | + echo '# OpenStack endpoints' 2026-03-30 01:24:48.432612 | orchestrator | + echo 2026-03-30 01:24:48.432616 | orchestrator | + openstack endpoint list 2026-03-30 01:24:51.821798 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-30 01:24:51.821873 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-30 01:24:51.821878 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-30 01:24:51.821882 | orchestrator | | 01deec45fed74d2c978e2d32a0974729 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-30 01:24:51.821886 | orchestrator | | 09daf867a30240bd85f06b3ba9a263e1 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-30 01:24:51.821890 | orchestrator | | 13bf84b5799d40da9d1d0aa396c7510d | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-30 01:24:51.821894 | orchestrator | | 2271d2a7cb2c42188250772f6e50df9c | RegionOne | designate | dns | True | 
internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-30 01:24:51.821953 | orchestrator | | 25dcab94d4d14dfca4bf5b625666e824 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-30 01:24:51.821957 | orchestrator | | 279db05a66c342f39b073af67ba7070c | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-30 01:24:51.821961 | orchestrator | | 34726496d3ce4d1cb35601af44aea264 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-30 01:24:51.821965 | orchestrator | | 35a6c90968f84dd2a2a28dbb09a0daf6 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-30 01:24:51.821969 | orchestrator | | 381a470ea275403f8900322e6773a094 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-30 01:24:51.821973 | orchestrator | | 3e80e0f2f7744fddb02b0d61bf99cf96 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-30 01:24:51.821976 | orchestrator | | 53b113ad12ba418a855826636383c430 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-30 01:24:51.821980 | orchestrator | | 5b41742b6b41412f9df5925d3c747ce0 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-30 01:24:51.821984 | orchestrator | | 637b99722c5a4567958df01ab3309b7c | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-30 01:24:51.821988 | orchestrator | | 63efa37e4b644788b95bfce6c0502c32 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-30 01:24:51.821991 | orchestrator | | 732c3fc4f2b5401ba2f7fd314e58917a | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-30 
01:24:51.821995 | orchestrator | | 7ee1e0985e0748d9a8734fa2fb632504 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-30 01:24:51.821999 | orchestrator | | 90820596f6674fd1bd9e8771a653142f | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-30 01:24:51.822071 | orchestrator | | 9327939688e441cab7db903d557d0c0e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-30 01:24:51.822080 | orchestrator | | a29aefa500924b72adb57dc923f6c5a6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-30 01:24:51.822086 | orchestrator | | b485e952be39463485fb867617012560 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-30 01:24:51.822108 | orchestrator | | d0bce8cba0ea43ec9690b969fdb1ae42 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-30 01:24:51.822120 | orchestrator | | fd78bedb8d0543a6a727d05b47797a54 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-30 01:24:51.822127 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-30 01:24:52.101672 | orchestrator | 2026-03-30 01:24:52.101766 | orchestrator | # Cinder 2026-03-30 01:24:52.101776 | orchestrator | 2026-03-30 01:24:52.101783 | orchestrator | + echo 2026-03-30 01:24:52.101789 | orchestrator | + echo '# Cinder' 2026-03-30 01:24:52.101796 | orchestrator | + echo 2026-03-30 01:24:52.101802 | orchestrator | + openstack volume service list 2026-03-30 01:24:55.000541 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-30 01:24:55.000644 | orchestrator 
| | Binary | Host | Zone | Status | State | Updated At | 2026-03-30 01:24:55.000654 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-30 01:24:55.000662 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-30T01:24:49.000000 | 2026-03-30 01:24:55.000760 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-30T01:24:49.000000 | 2026-03-30 01:24:55.000770 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-30T01:24:50.000000 | 2026-03-30 01:24:55.000777 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-30T01:24:49.000000 | 2026-03-30 01:24:55.000784 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-30T01:24:45.000000 | 2026-03-30 01:24:55.000791 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-30T01:24:48.000000 | 2026-03-30 01:24:55.000799 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-30T01:24:52.000000 | 2026-03-30 01:24:55.000806 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-30T01:24:45.000000 | 2026-03-30 01:24:55.000812 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-30T01:24:45.000000 | 2026-03-30 01:24:55.000820 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-30 01:24:55.323854 | orchestrator | 2026-03-30 01:24:55.323998 | orchestrator | # Neutron 2026-03-30 01:24:55.324008 | orchestrator | 2026-03-30 01:24:55.324012 | orchestrator | + echo 2026-03-30 01:24:55.324017 | orchestrator | + echo '# Neutron' 2026-03-30 01:24:55.324021 | orchestrator | + echo 2026-03-30 01:24:55.324028 | orchestrator | + openstack network agent list 2026-03-30 01:24:57.949582 | 
orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-30 01:24:57.949636 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-30 01:24:57.949643 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-30 01:24:57.949666 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-30 01:24:57.949672 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-30 01:24:57.949678 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-30 01:24:57.949683 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-30 01:24:57.949688 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-30 01:24:57.949693 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-30 01:24:57.949698 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-30 01:24:57.949703 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-30 01:24:57.949708 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-30 01:24:57.949714 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-30 01:24:58.240756 | orchestrator | + openstack network service provider list 2026-03-30 01:25:00.825665 | orchestrator | +---------------+------+---------+ 2026-03-30 01:25:00.825739 | orchestrator | | Service Type | Name | Default | 2026-03-30 01:25:00.825752 | orchestrator | +---------------+------+---------+ 2026-03-30 01:25:00.825761 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-30 01:25:00.825769 | orchestrator | +---------------+------+---------+ 2026-03-30 01:25:01.135443 | orchestrator | 2026-03-30 01:25:01.135494 | orchestrator | # Nova 2026-03-30 01:25:01.135503 | orchestrator | 2026-03-30 01:25:01.135510 | orchestrator | + echo 2026-03-30 01:25:01.135516 | orchestrator | + echo '# Nova' 2026-03-30 01:25:01.135523 | orchestrator | + echo 2026-03-30 01:25:01.135539 | orchestrator | + openstack compute service list 2026-03-30 01:25:03.963508 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-30 01:25:03.963604 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-30 01:25:03.963615 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-30 01:25:03.963623 | orchestrator | | 3ee35cb8-1cd9-406f-9a4b-2927c2335623 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-30T01:24:54.000000 | 2026-03-30 01:25:03.963630 | orchestrator | | 1132b07d-ee90-4137-997a-873bac769ad4 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-30T01:25:02.000000 | 2026-03-30 01:25:03.963638 | orchestrator | | 50ee2ada-c7e7-478c-a6f9-f67955a1e7c4 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-30T01:25:03.000000 | 2026-03-30 
01:25:03.963645 | orchestrator | | bfb80ea2-5880-4d7b-9108-049697843263 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-30T01:25:03.000000 | 2026-03-30 01:25:03.963652 | orchestrator | | 2ba95ff4-55fa-4f66-8ef0-5d803253aa14 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-30T01:24:55.000000 | 2026-03-30 01:25:03.963660 | orchestrator | | 43a7876c-4af0-4cd0-a85b-53ec9ba792c4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-30T01:24:57.000000 | 2026-03-30 01:25:03.963668 | orchestrator | | 0046f725-cbd1-4d18-b5c3-abf8dee026c8 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-30T01:25:03.000000 | 2026-03-30 01:25:03.963697 | orchestrator | | 0048fefa-aac2-4ea7-bce5-6b1b2d65de47 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-30T01:25:03.000000 | 2026-03-30 01:25:03.963704 | orchestrator | | 2b2e6f56-ff44-4d7d-9e3d-c1a5c177466d | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-30T01:25:03.000000 | 2026-03-30 01:25:03.963711 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-30 01:25:04.204998 | orchestrator | + openstack hypervisor list 2026-03-30 01:25:06.752865 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-30 01:25:06.753008 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-30 01:25:06.753030 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-30 01:25:06.753045 | orchestrator | | f74e0735-71c4-4c6e-a001-7c3586f3c604 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-30 01:25:06.753062 | orchestrator | | c85142fb-ec77-4036-8d26-94ff296c0152 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-30 01:25:06.753076 | orchestrator | | 
78205b8c-43fa-4bbd-bbe8-008b02b8b021 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-30 01:25:06.753092 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-30 01:25:07.033809 | orchestrator | 2026-03-30 01:25:07.033865 | orchestrator | # Run OpenStack test play 2026-03-30 01:25:07.033874 | orchestrator | 2026-03-30 01:25:07.033880 | orchestrator | + echo 2026-03-30 01:25:07.033886 | orchestrator | + echo '# Run OpenStack test play' 2026-03-30 01:25:07.033925 | orchestrator | + echo 2026-03-30 01:25:07.033932 | orchestrator | + osism apply --environment openstack test 2026-03-30 01:25:08.256551 | orchestrator | 2026-03-30 01:25:08 | INFO  | Trying to run play test in environment openstack 2026-03-30 01:25:08.285465 | orchestrator | 2026-03-30 01:25:08 | INFO  | Prepare task for execution of test. 2026-03-30 01:25:08.352176 | orchestrator | 2026-03-30 01:25:08 | INFO  | Task e11eb327-5c4b-44f3-91b8-faed60422d80 (test) was prepared for execution. 2026-03-30 01:25:08.352237 | orchestrator | 2026-03-30 01:25:08 | INFO  | It takes a moment until task e11eb327-5c4b-44f3-91b8-faed60422d80 (test) has been started and output is visible here. 
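
One consistency check implied by the two listings above: every host reporting the `nova-compute` binary as enabled and up should also appear in the hypervisor list. A sketch of that cross-check using the hostnames from the tables (plain set arithmetic over copied values, not an OpenStack SDK call):

```python
# Hosts running nova-compute (from `openstack compute service list`)
compute_hosts = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}
# Hypervisor hostnames (from `openstack hypervisor list`)
hypervisors = {"testbed-node-4", "testbed-node-5", "testbed-node-3"}

missing = compute_hosts - hypervisors  # compute services without a hypervisor record
extra = hypervisors - compute_hosts    # hypervisors with no nova-compute service

assert not missing and not extra, (missing, extra)
print(f"{len(hypervisors)} hypervisors match {len(compute_hosts)} nova-compute services")
# → 3 hypervisors match 3 nova-compute services
```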

PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Monday 30 March 2026 01:25:11 +0000 (0:00:00.101) 0:00:00.101 **********
changed: [localhost]

TASK [Create test-admin user] **************************************************
Monday 30 March 2026 01:25:15 +0000 (0:00:03.841) 0:00:03.943 **********
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Monday 30 March 2026 01:25:19 +0000 (0:00:04.430) 0:00:08.373 **********
changed: [localhost]

TASK [Create test project] *****************************************************
Monday 30 March 2026 01:25:26 +0000 (0:00:06.849) 0:00:15.223 **********
changed: [localhost]

TASK [Create test user] ********************************************************
Monday 30 March 2026 01:25:30 +0000 (0:00:03.953) 0:00:19.176 **********
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Monday 30 March 2026 01:25:34 +0000 (0:00:04.128) 0:00:23.305 **********
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Monday 30 March 2026 01:25:46 +0000 (0:00:11.502) 0:00:34.808 **********
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Monday 30 March 2026 01:25:51 +0000 (0:00:04.952) 0:00:39.761 **********
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Monday 30 March 2026 01:25:55 +0000 (0:00:04.654) 0:00:44.415 **********
changed: [localhost]

TASK [Create icmp security group] **********************************************
Monday 30 March 2026 01:26:00 +0000 (0:00:04.184) 0:00:48.600 **********
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Monday 30 March 2026 01:26:04 +0000 (0:00:04.241) 0:00:52.842 **********
changed: [localhost]

TASK [Create test keypair] *****************************************************
Monday 30 March 2026 01:26:08 +0000 (0:00:04.234) 0:00:57.076 **********
changed: [localhost]

TASK [Create test network] *****************************************************
Monday 30 March 2026 01:26:12 +0000 (0:00:04.213) 0:01:01.290 **********
changed: [localhost]

TASK [Create test subnet] ******************************************************
Monday 30 March 2026 01:26:18 +0000 (0:00:05.289) 0:01:06.579 **********
changed: [localhost]

TASK [Create test router] ******************************************************
Monday 30 March 2026 01:26:23 +0000 (0:00:05.855) 0:01:12.435 **********
changed: [localhost]

PLAY [Manage test instances and volumes] ***************************************

TASK [Get test server group] ***************************************************
Monday 30 March 2026 01:26:34 +0000 (0:00:10.872) 0:01:23.307 **********
ok: [localhost]

TASK [Detach test volume] ******************************************************
Monday 30 March 2026 01:26:38 +0000 (0:00:03.676) 0:01:26.984 **********
skipping: [localhost]

TASK [Delete test volume] ******************************************************
Monday 30 March 2026 01:26:38 +0000 (0:00:00.066) 0:01:27.050 **********
skipping: [localhost]

TASK [Delete test instances] ***************************************************
Monday 30 March 2026 01:26:38 +0000 (0:00:00.098) 0:01:27.149 **********
skipping: [localhost] => (item=test-4)
skipping: [localhost] => (item=test-3)
skipping: [localhost] => (item=test-2)
skipping: [localhost] => (item=test-1)
skipping: [localhost] => (item=test)
skipping: [localhost]

TASK [Wait for instance deletion to complete] **********************************
Monday 30 March 2026 01:26:38 +0000 (0:00:00.148) 0:01:27.297 **********
skipping: [localhost]

TASK [Create test instances] ***************************************************
Monday 30 March 2026 01:26:38 +0000 (0:00:00.127) 0:01:27.424 **********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for instance creation to complete] **********************************
Monday 30 March 2026 01:26:43 +0000 (0:00:04.546) 0:01:31.971 **********
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j298525737315.2819', 'results_file': '/ansible/.ansible_async/j298525737315.2819', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j836215160381.2844', 'results_file': '/ansible/.ansible_async/j836215160381.2844', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j292003158627.2869', 'results_file': '/ansible/.ansible_async/j292003158627.2869', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j957746006715.2901', 'results_file': '/ansible/.ansible_async/j957746006715.2901', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j239932080932.2926', 'results_file': '/ansible/.ansible_async/j239932080932.2926', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Add metadata to instances] ***********************************************
Monday 30 March 2026 01:27:41 +0000 (0:00:57.999) 0:02:29.970 **********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for metadata to be added] *******************************************
Monday 30 March 2026 01:27:46 +0000 (0:00:04.666) 0:02:34.637 **********
FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j737151845124.3037', 'results_file': '/ansible/.ansible_async/j737151845124.3037', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j17706107619.3062', 'results_file': '/ansible/.ansible_async/j17706107619.3062', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j690809367743.3087', 'results_file': '/ansible/.ansible_async/j690809367743.3087', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j655681589118.3112', 'results_file': '/ansible/.ansible_async/j655681589118.3112', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j938198526693.3137', 'results_file': '/ansible/.ansible_async/j938198526693.3137', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Add tag to instances] ****************************************************
Monday 30 March 2026 01:27:56 +0000 (0:00:09.971) 0:02:44.609 **********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for tags to be added] ***********************************************
Monday 30 March 2026 01:28:00 +0000 (0:00:04.682) 0:02:49.292 **********
FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j702009963268.3206', 'results_file': '/ansible/.ansible_async/j702009963268.3206', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j6694134753.3231', 'results_file': '/ansible/.ansible_async/j6694134753.3231', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j4949157859.3257', 'results_file': '/ansible/.ansible_async/j4949157859.3257', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j667762306875.3283', 'results_file': '/ansible/.ansible_async/j667762306875.3283', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j266463722821.3309', 'results_file': '/ansible/.ansible_async/j266463722821.3309', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Create test volume] ******************************************************
Monday 30 March 2026 01:28:10 +0000 (0:00:06.960) 0:02:59.434 **********
changed: [localhost]

TASK [Attach test volume] ******************************************************
Monday 30 March 2026 01:28:17 +0000 (0:00:12.988) 0:03:06.394 **********
changed: [localhost]

TASK [Create floating ip address] **********************************************
Monday 30 March 2026 01:28:30 +0000 (0:00:04.628) 0:03:19.382 **********
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Monday 30 March 2026 01:28:35 +0000 (0:00:00.053) 0:03:24.011 **********
ok: [localhost] => {
    "msg": "192.168.112.162"
}

PLAY RECAP *********************************************************************
localhost                  : ok=26   changed=23   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0


TASKS RECAP ********************************************************************
Monday 30 March 2026 01:28:35 +0000 0:03:24.064 **********
===============================================================================
Wait for instance creation to complete --------------------------------- 58.00s
Attach test volume ----------------------------------------------------- 12.99s
Add member roles to user test ------------------------------------------ 11.50s
Create test router ----------------------------------------------------- 10.87s
Wait for tags to be added ---------------------------------------------- 10.14s
Wait for metadata to be added ------------------------------------------- 9.97s
Create test volume ------------------------------------------------------ 6.96s
Add manager role to user test-admin ------------------------------------- 6.85s
Create test subnet ------------------------------------------------------ 5.85s
Create test network ----------------------------------------------------- 5.29s
Create test server group ------------------------------------------------ 4.95s
Add tag to instances ---------------------------------------------------- 4.68s
Add metadata to instances ----------------------------------------------- 4.67s
Create ssh security group ----------------------------------------------- 4.65s
Create floating ip address ---------------------------------------------- 4.63s
Create test instances --------------------------------------------------- 4.55s
Create test-admin user -------------------------------------------------- 4.43s
Create icmp security group ---------------------------------------------- 4.24s
Add rule to icmp security group ----------------------------------------- 4.23s
Create test keypair ----------------------------------------------------- 4.21s
+ server_list
+ openstack --os-cloud test server list
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
| ID                                   | Name   | Status | Networks                              | Image                    | Flavor   |
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
| dac33b73-ab4e-4af2-bd0c-52da790e5c25 | test-4 | ACTIVE | test=192.168.112.134, 192.168.200.141 | N/A (booted from volume) | SCS-1L-1 |
| 18c36eb5-ffb2-42f3-934d-dd9017616912 | test-3 | ACTIVE | test=192.168.112.195, 192.168.200.88  | N/A (booted from volume) | SCS-1L-1 |
| 1547377a-973c-4df3-8d29-4d6be7c4c5f3 | test-1 | ACTIVE | test=192.168.112.168, 192.168.200.228 | N/A (booted from volume) | SCS-1L-1 |
| 488a0aa7-3295-4aaa-8881-32fc3a740872 | test-2 | ACTIVE | test=192.168.112.179, 192.168.200.108 | N/A (booted from volume) | SCS-1L-1 |
| 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a | test   | ACTIVE | test=192.168.112.162, 192.168.200.22  | N/A (booted from volume) | SCS-1L-1 |
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
+ openstack --os-cloud test server show test
+-------------------------------------+--------------------------------------+
| Field                               | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-30T01:27:17.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.162, 192.168.200.22 |
| config_drive | |
| created | 2026-03-30T01:26:47Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 684aa4e7ed9039ea24178b72ce8695876991364a1e104417c66618c7 |
| host_status | None |
| id | 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ca3eaeb21db74e2d81881cf4b6a9ef29 |
| properties | hostname='test' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-30T01:27:47Z |
| user_id | a475b38df342484b994a7ab4301cf929 |
| volumes_attached | delete_on_termination='True', id='91ba3195-91e6-40c3-b310-62992230d529' |
| | delete_on_termination='False', id='e065f289-d7de-4ffd-a667-88735f8a41aa' |
+-------------------------------------+--------------------------------------+
+ openstack --os-cloud test server show test-1
+-------------------------------------+--------------------------------------+
| Field                               | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-30T01:27:15.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.168, 192.168.200.228 |
| config_drive | |
| created | 2026-03-30T01:26:48Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 684aa4e7ed9039ea24178b72ce8695876991364a1e104417c66618c7 |
| host_status | None |
| id | 1547377a-973c-4df3-8d29-4d6be7c4c5f3 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ca3eaeb21db74e2d81881cf4b6a9ef29 |
| properties | hostname='test-1' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-30T01:27:48Z |
| user_id | a475b38df342484b994a7ab4301cf929 |
| volumes_attached | delete_on_termination='True', id='0c4e1f9e-6884-41d3-90ce-a74bdb748bf8' |
+-------------------------------------+--------------------------------------+
+ openstack --os-cloud test server show test-2
+-------------------------------------+--------------------------------------+
| Field                               | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-30T01:27:16.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.179, 192.168.200.108 |
| config_drive | |
| created | 2026-03-30T01:26:48Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 684aa4e7ed9039ea24178b72ce8695876991364a1e104417c66618c7 |
| host_status | None |
| id | 488a0aa7-3295-4aaa-8881-32fc3a740872 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ca3eaeb21db74e2d81881cf4b6a9ef29 |
| properties | hostname='test-2' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-30T01:27:49Z |
| user_id | a475b38df342484b994a7ab4301cf929 |
| volumes_attached | delete_on_termination='True', id='42bd3492-2507-4f67-aa45-665df3a82c58' |
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:49.599518 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-30 01:28:52.594263 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:52.594365 | orchestrator | | Field | Value | 2026-03-30 01:28:52.594376 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:52.594402 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-30 01:28:52.594408 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-30 01:28:52.594415 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-30 01:28:52.594435 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-30 01:28:52.594442 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-30 01:28:52.594448 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-30 
01:28:52.594470 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-30 01:28:52.594479 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-30 01:28:52.594486 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-30 01:28:52.594498 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-30 01:28:52.594506 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-30 01:28:52.594512 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-30 01:28:52.594518 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-30 01:28:52.594530 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-30 01:28:52.594537 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-30 01:28:52.594543 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-30T01:27:17.000000 | 2026-03-30 01:28:52.594552 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-30 01:28:52.594559 | orchestrator | | accessIPv4 | | 2026-03-30 01:28:52.594570 | orchestrator | | accessIPv6 | | 2026-03-30 01:28:52.594575 | orchestrator | | addresses | test=192.168.112.195, 192.168.200.88 | 2026-03-30 01:28:52.594581 | orchestrator | | config_drive | | 2026-03-30 01:28:52.594587 | orchestrator | | created | 2026-03-30T01:26:50Z | 2026-03-30 01:28:52.594597 | orchestrator | | description | None | 2026-03-30 01:28:52.594603 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-30 01:28:52.594610 | orchestrator | | hostId | 684aa4e7ed9039ea24178b72ce8695876991364a1e104417c66618c7 | 2026-03-30 01:28:52.594616 | orchestrator | | host_status | None | 2026-03-30 01:28:52.594627 | orchestrator | | id | 
18c36eb5-ffb2-42f3-934d-dd9017616912 | 2026-03-30 01:28:52.594633 | orchestrator | | image | N/A (booted from volume) | 2026-03-30 01:28:52.594644 | orchestrator | | key_name | test | 2026-03-30 01:28:52.594650 | orchestrator | | locked | False | 2026-03-30 01:28:52.594656 | orchestrator | | locked_reason | None | 2026-03-30 01:28:52.594661 | orchestrator | | name | test-3 | 2026-03-30 01:28:52.594671 | orchestrator | | pinned_availability_zone | None | 2026-03-30 01:28:52.594678 | orchestrator | | progress | 0 | 2026-03-30 01:28:52.594684 | orchestrator | | project_id | ca3eaeb21db74e2d81881cf4b6a9ef29 | 2026-03-30 01:28:52.594690 | orchestrator | | properties | hostname='test-3' | 2026-03-30 01:28:52.594701 | orchestrator | | security_groups | name='icmp' | 2026-03-30 01:28:52.594714 | orchestrator | | | name='ssh' | 2026-03-30 01:28:52.594721 | orchestrator | | server_groups | None | 2026-03-30 01:28:52.594727 | orchestrator | | status | ACTIVE | 2026-03-30 01:28:52.594734 | orchestrator | | tags | test | 2026-03-30 01:28:52.594740 | orchestrator | | trusted_image_certificates | None | 2026-03-30 01:28:52.594750 | orchestrator | | updated | 2026-03-30T01:27:49Z | 2026-03-30 01:28:52.594756 | orchestrator | | user_id | a475b38df342484b994a7ab4301cf929 | 2026-03-30 01:28:52.594762 | orchestrator | | volumes_attached | delete_on_termination='True', id='061a9902-a6c2-4830-a406-3776fbc99045' | 2026-03-30 01:28:52.597880 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:52.866868 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-30 01:28:55.681250 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:55.681319 | orchestrator | | Field | Value | 2026-03-30 01:28:55.681328 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:55.681334 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-30 01:28:55.681339 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-30 01:28:55.681344 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-30 01:28:55.681349 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-30 01:28:55.681354 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-30 01:28:55.681359 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-30 01:28:55.681388 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-30 01:28:55.681394 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-30 01:28:55.681399 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-30 01:28:55.681404 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-30 01:28:55.681409 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-30 01:28:55.681414 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-30 01:28:55.681678 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-30 01:28:55.681686 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-30 01:28:55.681691 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-30 01:28:55.681702 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-30T01:27:15.000000 | 2026-03-30 01:28:55.681712 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-30 01:28:55.681717 | orchestrator | | accessIPv4 | | 2026-03-30 01:28:55.681722 | orchestrator | | accessIPv6 | | 2026-03-30 01:28:55.681727 | orchestrator | | addresses | test=192.168.112.134, 192.168.200.141 | 2026-03-30 01:28:55.681735 | orchestrator | | config_drive | | 2026-03-30 01:28:55.681740 | orchestrator | | created | 2026-03-30T01:26:51Z | 2026-03-30 01:28:55.681745 | orchestrator | | description | None | 2026-03-30 01:28:55.681750 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-30 01:28:55.681755 | orchestrator | | hostId | 5cbc900f88758f0238e9498eb35968e90083991572b6ae6cfcd048f0 | 2026-03-30 01:28:55.681763 | orchestrator | | host_status | None | 2026-03-30 01:28:55.681772 | orchestrator | | id | dac33b73-ab4e-4af2-bd0c-52da790e5c25 | 2026-03-30 01:28:55.681777 | orchestrator | | image | N/A (booted from volume) | 2026-03-30 01:28:55.681782 | orchestrator | | key_name | test | 2026-03-30 01:28:55.681787 | orchestrator | | locked | False | 2026-03-30 01:28:55.681849 | orchestrator | | locked_reason | None | 2026-03-30 01:28:55.681860 | orchestrator | | name | test-4 | 2026-03-30 01:28:55.681869 | orchestrator | | pinned_availability_zone | None | 2026-03-30 01:28:55.681877 | orchestrator | | progress | 0 | 2026-03-30 
01:28:55.681894 | orchestrator | | project_id | ca3eaeb21db74e2d81881cf4b6a9ef29 | 2026-03-30 01:28:55.681905 | orchestrator | | properties | hostname='test-4' | 2026-03-30 01:28:55.681920 | orchestrator | | security_groups | name='icmp' | 2026-03-30 01:28:55.681930 | orchestrator | | | name='ssh' | 2026-03-30 01:28:55.681939 | orchestrator | | server_groups | None | 2026-03-30 01:28:55.681948 | orchestrator | | status | ACTIVE | 2026-03-30 01:28:55.681961 | orchestrator | | tags | test | 2026-03-30 01:28:55.681970 | orchestrator | | trusted_image_certificates | None | 2026-03-30 01:28:55.681978 | orchestrator | | updated | 2026-03-30T01:27:50Z | 2026-03-30 01:28:55.681991 | orchestrator | | user_id | a475b38df342484b994a7ab4301cf929 | 2026-03-30 01:28:55.681999 | orchestrator | | volumes_attached | delete_on_termination='True', id='912eea19-4c11-42fd-833c-e33f40c9ae3f' | 2026-03-30 01:28:55.687598 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-30 01:28:55.969904 | orchestrator | + server_ping 2026-03-30 01:28:55.972575 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-30 01:28:55.972651 | orchestrator | ++ tr -d '\r' 2026-03-30 01:28:58.813631 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:28:58.813711 | orchestrator | + ping -c3 192.168.112.168 2026-03-30 01:28:58.826764 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 
2026-03-30 01:28:58.826903 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=5.02 ms 2026-03-30 01:28:59.824628 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.23 ms 2026-03-30 01:29:00.825907 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.88 ms 2026-03-30 01:29:00.825989 | orchestrator | 2026-03-30 01:29:00.826006 | orchestrator | --- 192.168.112.168 ping statistics --- 2026-03-30 01:29:00.826050 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-30 01:29:00.826056 | orchestrator | rtt min/avg/max/mdev = 1.876/3.044/5.024/1.407 ms 2026-03-30 01:29:00.826575 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:00.826612 | orchestrator | + ping -c3 192.168.112.162 2026-03-30 01:29:00.837758 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 2026-03-30 01:29:00.837891 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=6.51 ms 2026-03-30 01:29:01.835264 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.08 ms 2026-03-30 01:29:02.836111 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.76 ms 2026-03-30 01:29:02.836193 | orchestrator | 2026-03-30 01:29:02.836201 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-03-30 01:29:02.836209 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:29:02.836216 | orchestrator | rtt min/avg/max/mdev = 1.760/3.450/6.514/2.169 ms 2026-03-30 01:29:02.836670 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:02.836698 | orchestrator | + ping -c3 192.168.112.195 2026-03-30 01:29:02.850082 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 
2026-03-30 01:29:02.850157 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=8.69 ms 2026-03-30 01:29:03.846352 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.44 ms 2026-03-30 01:29:04.847932 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=1.53 ms 2026-03-30 01:29:04.847995 | orchestrator | 2026-03-30 01:29:04.848005 | orchestrator | --- 192.168.112.195 ping statistics --- 2026-03-30 01:29:04.848014 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-30 01:29:04.848050 | orchestrator | rtt min/avg/max/mdev = 1.528/4.218/8.685/3.180 ms 2026-03-30 01:29:04.848649 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:04.848675 | orchestrator | + ping -c3 192.168.112.179 2026-03-30 01:29:04.859244 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-03-30 01:29:04.859295 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.36 ms 2026-03-30 01:29:05.856167 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.58 ms 2026-03-30 01:29:06.857033 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.20 ms 2026-03-30 01:29:06.857101 | orchestrator | 2026-03-30 01:29:06.857110 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-03-30 01:29:06.857117 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:29:06.857123 | orchestrator | rtt min/avg/max/mdev = 1.197/3.044/6.362/2.350 ms 2026-03-30 01:29:06.857952 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:06.857969 | orchestrator | + ping -c3 192.168.112.134 2026-03-30 01:29:06.864272 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 
2026-03-30 01:29:06.864321 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=3.22 ms 2026-03-30 01:29:07.864330 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.54 ms 2026-03-30 01:29:08.866967 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.93 ms 2026-03-30 01:29:08.867063 | orchestrator | 2026-03-30 01:29:08.867073 | orchestrator | --- 192.168.112.134 ping statistics --- 2026-03-30 01:29:08.867081 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-30 01:29:08.867088 | orchestrator | rtt min/avg/max/mdev = 1.540/2.232/3.224/0.719 ms 2026-03-30 01:29:08.867991 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-30 01:29:08.868038 | orchestrator | + compute_list 2026-03-30 01:29:08.868045 | orchestrator | + osism manage compute list testbed-node-3 2026-03-30 01:29:10.472289 | orchestrator | 2026-03-30 01:29:10 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:10.472377 | orchestrator | 2026-03-30 01:29:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:10.472390 | orchestrator | 2026-03-30 01:29:10 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:14.121945 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:14.122091 | orchestrator | | ID | Name | Status | 2026-03-30 01:29:14.122103 | orchestrator | |--------------------------------------+--------+----------| 2026-03-30 01:29:14.122107 | orchestrator | | 18c36eb5-ffb2-42f3-934d-dd9017616912 | test-3 | ACTIVE | 2026-03-30 01:29:14.122111 | orchestrator | | 1547377a-973c-4df3-8d29-4d6be7c4c5f3 | test-1 | ACTIVE | 2026-03-30 01:29:14.122116 | orchestrator | | 488a0aa7-3295-4aaa-8881-32fc3a740872 | test-2 | ACTIVE | 2026-03-30 01:29:14.122121 | orchestrator | | 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a | test | ACTIVE | 2026-03-30 01:29:14.122125 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-03-30 01:29:14.422681 | orchestrator | + osism manage compute list testbed-node-4 2026-03-30 01:29:15.932202 | orchestrator | 2026-03-30 01:29:15 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:15.932274 | orchestrator | 2026-03-30 01:29:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:15.932283 | orchestrator | 2026-03-30 01:29:15 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:17.115532 | orchestrator | +------+--------+----------+ 2026-03-30 01:29:17.115613 | orchestrator | | ID | Name | Status | 2026-03-30 01:29:17.115619 | orchestrator | |------+--------+----------| 2026-03-30 01:29:17.115623 | orchestrator | +------+--------+----------+ 2026-03-30 01:29:17.392700 | orchestrator | + osism manage compute list testbed-node-5 2026-03-30 01:29:18.997190 | orchestrator | 2026-03-30 01:29:18 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:18.997253 | orchestrator | 2026-03-30 01:29:18 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:18.997260 | orchestrator | 2026-03-30 01:29:18 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:20.621749 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:20.621905 | orchestrator | | ID | Name | Status | 2026-03-30 01:29:20.621913 | orchestrator | |--------------------------------------+--------+----------| 2026-03-30 01:29:20.621917 | orchestrator | | dac33b73-ab4e-4af2-bd0c-52da790e5c25 | test-4 | ACTIVE | 2026-03-30 01:29:20.621922 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:20.898899 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-03-30 01:29:22.493766 | orchestrator | 2026-03-30 01:29:22 | ERROR  | Unable to get 
ansible vault password 2026-03-30 01:29:22.493892 | orchestrator | 2026-03-30 01:29:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:22.493903 | orchestrator | 2026-03-30 01:29:22 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:23.606569 | orchestrator | 2026-03-30 01:29:23 | INFO  | No migratable instances found on node testbed-node-4 2026-03-30 01:29:23.981853 | orchestrator | + compute_list 2026-03-30 01:29:23.981958 | orchestrator | + osism manage compute list testbed-node-3 2026-03-30 01:29:25.604451 | orchestrator | 2026-03-30 01:29:25 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:25.604541 | orchestrator | 2026-03-30 01:29:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:25.605041 | orchestrator | 2026-03-30 01:29:25 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:27.614269 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:27.614354 | orchestrator | | ID | Name | Status | 2026-03-30 01:29:27.614363 | orchestrator | |--------------------------------------+--------+----------| 2026-03-30 01:29:27.614371 | orchestrator | | 18c36eb5-ffb2-42f3-934d-dd9017616912 | test-3 | ACTIVE | 2026-03-30 01:29:27.614377 | orchestrator | | 1547377a-973c-4df3-8d29-4d6be7c4c5f3 | test-1 | ACTIVE | 2026-03-30 01:29:27.614384 | orchestrator | | 488a0aa7-3295-4aaa-8881-32fc3a740872 | test-2 | ACTIVE | 2026-03-30 01:29:27.614390 | orchestrator | | 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a | test | ACTIVE | 2026-03-30 01:29:27.614398 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:27.883197 | orchestrator | + osism manage compute list testbed-node-4 2026-03-30 01:29:29.504923 | orchestrator | 2026-03-30 01:29:29 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:29.504995 | orchestrator | 
2026-03-30 01:29:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:29.505003 | orchestrator | 2026-03-30 01:29:29 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:30.678190 | orchestrator | +------+--------+----------+ 2026-03-30 01:29:30.678270 | orchestrator | | ID | Name | Status | 2026-03-30 01:29:30.678275 | orchestrator | |------+--------+----------| 2026-03-30 01:29:30.678280 | orchestrator | +------+--------+----------+ 2026-03-30 01:29:30.980866 | orchestrator | + osism manage compute list testbed-node-5 2026-03-30 01:29:32.613744 | orchestrator | 2026-03-30 01:29:32 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:32.613847 | orchestrator | 2026-03-30 01:29:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:32.613855 | orchestrator | 2026-03-30 01:29:32 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:34.232278 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:34.232398 | orchestrator | | ID | Name | Status | 2026-03-30 01:29:34.232408 | orchestrator | |--------------------------------------+--------+----------| 2026-03-30 01:29:34.232414 | orchestrator | | dac33b73-ab4e-4af2-bd0c-52da790e5c25 | test-4 | ACTIVE | 2026-03-30 01:29:34.232421 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:29:34.518896 | orchestrator | + server_ping 2026-03-30 01:29:34.520173 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-30 01:29:34.520550 | orchestrator | ++ tr -d '\r' 2026-03-30 01:29:37.350844 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:37.350926 | orchestrator | + ping -c3 192.168.112.168 2026-03-30 
01:29:37.358487 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2026-03-30 01:29:37.358574 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=5.62 ms 2026-03-30 01:29:38.357126 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.00 ms 2026-03-30 01:29:39.358073 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.69 ms 2026-03-30 01:29:39.358203 | orchestrator | 2026-03-30 01:29:39.358217 | orchestrator | --- 192.168.112.168 ping statistics --- 2026-03-30 01:29:39.358226 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:29:39.358233 | orchestrator | rtt min/avg/max/mdev = 1.689/3.103/5.622/1.785 ms 2026-03-30 01:29:39.358911 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:39.358952 | orchestrator | + ping -c3 192.168.112.162 2026-03-30 01:29:39.369597 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-03-30 01:29:39.369670 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=7.51 ms 2026-03-30 01:29:40.366302 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.33 ms 2026-03-30 01:29:41.368156 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.92 ms 2026-03-30 01:29:41.368269 | orchestrator | 2026-03-30 01:29:41.368276 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-03-30 01:29:41.368282 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:29:41.368286 | orchestrator | rtt min/avg/max/mdev = 1.921/3.919/7.505/2.540 ms 2026-03-30 01:29:41.368291 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:41.368296 | orchestrator | + ping -c3 192.168.112.195 2026-03-30 01:29:41.380162 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 2026-03-30 01:29:41.380264 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=8.55 ms 2026-03-30 01:29:42.374660 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=1.46 ms 2026-03-30 01:29:43.376589 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=1.27 ms 2026-03-30 01:29:43.376652 | orchestrator | 2026-03-30 01:29:43.376662 | orchestrator | --- 192.168.112.195 ping statistics --- 2026-03-30 01:29:43.376670 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:29:43.376675 | orchestrator | rtt min/avg/max/mdev = 1.272/3.760/8.550/3.387 ms 2026-03-30 01:29:43.377236 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:43.377251 | orchestrator | + ping -c3 192.168.112.179 2026-03-30 01:29:43.384540 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2026-03-30 01:29:43.384593 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.16 ms 2026-03-30 01:29:44.383548 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.59 ms 2026-03-30 01:29:45.385291 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.10 ms 2026-03-30 01:29:45.385348 | orchestrator | 2026-03-30 01:29:45.385357 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-03-30 01:29:45.385364 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-30 01:29:45.385370 | orchestrator | rtt min/avg/max/mdev = 1.103/2.283/4.158/1.340 ms 2026-03-30 01:29:45.385919 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:29:45.386056 | orchestrator | + ping -c3 192.168.112.134 2026-03-30 01:29:45.395270 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 2026-03-30 01:29:45.395329 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=4.35 ms 2026-03-30 01:29:46.394142 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.45 ms 2026-03-30 01:29:47.395858 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.45 ms 2026-03-30 01:29:47.395930 | orchestrator | 2026-03-30 01:29:47.395943 | orchestrator | --- 192.168.112.134 ping statistics --- 2026-03-30 01:29:47.395954 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:29:47.395964 | orchestrator | rtt min/avg/max/mdev = 1.448/2.416/4.350/1.367 ms 2026-03-30 01:29:47.396532 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-03-30 01:29:49.129814 | orchestrator | 2026-03-30 01:29:49 | ERROR  | Unable to get ansible vault password 2026-03-30 01:29:49.129900 | orchestrator | 2026-03-30 01:29:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-03-30 01:29:49.129912 | orchestrator | 2026-03-30 01:29:49 | ERROR  | Dropping encrypted entries 2026-03-30 01:29:50.975283 | orchestrator | 2026-03-30 01:29:50 | INFO  | Live migrating server dac33b73-ab4e-4af2-bd0c-52da790e5c25 2026-03-30 01:30:03.846833 | orchestrator | 2026-03-30 01:30:03 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:06.244147 | orchestrator | 2026-03-30 01:30:06 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:09.002305 | orchestrator | 2026-03-30 01:30:09 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:11.348432 | orchestrator | 2026-03-30 01:30:11 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:14.009013 | orchestrator | 2026-03-30 01:30:14 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:16.370261 | orchestrator | 2026-03-30 01:30:16 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:18.563142 | orchestrator | 2026-03-30 01:30:18 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:20.808994 | orchestrator | 2026-03-30 01:30:20 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:30:23.031872 | orchestrator | 2026-03-30 01:30:23 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) completed with status ACTIVE 2026-03-30 01:30:23.345472 | orchestrator | + compute_list 2026-03-30 01:30:23.345531 | orchestrator | + osism manage compute list testbed-node-3 2026-03-30 01:30:24.922528 | orchestrator | 2026-03-30 01:30:24 | ERROR  | Unable to get ansible vault password 2026-03-30 
01:30:24.922608 | orchestrator | 2026-03-30 01:30:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:30:24.922618 | orchestrator | 2026-03-30 01:30:24 | ERROR  | Dropping encrypted entries 2026-03-30 01:30:26.430688 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:30:26.430815 | orchestrator | | ID | Name | Status | 2026-03-30 01:30:26.430824 | orchestrator | |--------------------------------------+--------+----------| 2026-03-30 01:30:26.430828 | orchestrator | | dac33b73-ab4e-4af2-bd0c-52da790e5c25 | test-4 | ACTIVE | 2026-03-30 01:30:26.430832 | orchestrator | | 18c36eb5-ffb2-42f3-934d-dd9017616912 | test-3 | ACTIVE | 2026-03-30 01:30:26.430836 | orchestrator | | 1547377a-973c-4df3-8d29-4d6be7c4c5f3 | test-1 | ACTIVE | 2026-03-30 01:30:26.430861 | orchestrator | | 488a0aa7-3295-4aaa-8881-32fc3a740872 | test-2 | ACTIVE | 2026-03-30 01:30:26.430866 | orchestrator | | 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a | test | ACTIVE | 2026-03-30 01:30:26.430870 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:30:26.710180 | orchestrator | + osism manage compute list testbed-node-4 2026-03-30 01:30:28.247381 | orchestrator | 2026-03-30 01:30:28 | ERROR  | Unable to get ansible vault password 2026-03-30 01:30:28.247466 | orchestrator | 2026-03-30 01:30:28 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:30:28.247474 | orchestrator | 2026-03-30 01:30:28 | ERROR  | Dropping encrypted entries 2026-03-30 01:30:29.337404 | orchestrator | +------+--------+----------+ 2026-03-30 01:30:29.337476 | orchestrator | | ID | Name | Status | 2026-03-30 01:30:29.337482 | orchestrator | |------+--------+----------| 2026-03-30 01:30:29.337517 | orchestrator | +------+--------+----------+ 2026-03-30 01:30:29.687638 | orchestrator | + osism manage compute list 
testbed-node-5 2026-03-30 01:30:31.357512 | orchestrator | 2026-03-30 01:30:31 | ERROR  | Unable to get ansible vault password 2026-03-30 01:30:31.357584 | orchestrator | 2026-03-30 01:30:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:30:31.357591 | orchestrator | 2026-03-30 01:30:31 | ERROR  | Dropping encrypted entries 2026-03-30 01:30:32.556786 | orchestrator | +------+--------+----------+ 2026-03-30 01:30:32.556894 | orchestrator | | ID | Name | Status | 2026-03-30 01:30:32.556903 | orchestrator | |------+--------+----------| 2026-03-30 01:30:32.556907 | orchestrator | +------+--------+----------+ 2026-03-30 01:30:32.882568 | orchestrator | + server_ping 2026-03-30 01:30:32.882833 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-30 01:30:32.883251 | orchestrator | ++ tr -d '\r' 2026-03-30 01:30:35.492058 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:30:35.492124 | orchestrator | + ping -c3 192.168.112.168 2026-03-30 01:30:35.499072 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 
2026-03-30 01:30:35.499143 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=4.74 ms 2026-03-30 01:30:36.497974 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=1.44 ms 2026-03-30 01:30:37.500300 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.41 ms 2026-03-30 01:30:37.500407 | orchestrator | 2026-03-30 01:30:37.500421 | orchestrator | --- 192.168.112.168 ping statistics --- 2026-03-30 01:30:37.500429 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-30 01:30:37.500436 | orchestrator | rtt min/avg/max/mdev = 1.408/2.530/4.738/1.561 ms 2026-03-30 01:30:37.500445 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:30:37.500452 | orchestrator | + ping -c3 192.168.112.162 2026-03-30 01:30:37.510553 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 2026-03-30 01:30:37.510627 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=5.76 ms 2026-03-30 01:30:38.509150 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.21 ms 2026-03-30 01:30:39.509420 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.62 ms 2026-03-30 01:30:39.509526 | orchestrator | 2026-03-30 01:30:39.509533 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-03-30 01:30:39.509539 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:30:39.509544 | orchestrator | rtt min/avg/max/mdev = 1.616/3.197/5.761/1.829 ms 2026-03-30 01:30:39.510275 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:30:39.510302 | orchestrator | + ping -c3 192.168.112.195 2026-03-30 01:30:39.520835 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 
2026-03-30 01:30:39.520918 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=6.54 ms 2026-03-30 01:30:40.518094 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.13 ms 2026-03-30 01:30:41.519722 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.22 ms 2026-03-30 01:30:41.520001 | orchestrator | 2026-03-30 01:30:41.520021 | orchestrator | --- 192.168.112.195 ping statistics --- 2026-03-30 01:30:41.520032 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-30 01:30:41.520040 | orchestrator | rtt min/avg/max/mdev = 2.133/3.629/6.539/2.057 ms 2026-03-30 01:30:41.520059 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:30:41.520068 | orchestrator | + ping -c3 192.168.112.179 2026-03-30 01:30:41.530714 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-03-30 01:30:41.530804 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.51 ms 2026-03-30 01:30:42.528129 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.32 ms 2026-03-30 01:30:43.530218 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.32 ms 2026-03-30 01:30:43.530289 | orchestrator | 2026-03-30 01:30:43.530297 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-03-30 01:30:43.530304 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:30:43.530310 | orchestrator | rtt min/avg/max/mdev = 2.321/3.717/6.509/1.974 ms 2026-03-30 01:30:43.530317 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:30:43.530323 | orchestrator | + ping -c3 192.168.112.134 2026-03-30 01:30:43.541449 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 
2026-03-30 01:30:43.541563 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=5.65 ms 2026-03-30 01:30:44.539662 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.12 ms 2026-03-30 01:30:45.540847 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.56 ms 2026-03-30 01:30:45.540930 | orchestrator | 2026-03-30 01:30:45.540938 | orchestrator | --- 192.168.112.134 ping statistics --- 2026-03-30 01:30:45.540943 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:30:45.540948 | orchestrator | rtt min/avg/max/mdev = 1.558/3.109/5.646/1.808 ms 2026-03-30 01:30:45.540952 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-03-30 01:30:47.183988 | orchestrator | 2026-03-30 01:30:47 | ERROR  | Unable to get ansible vault password 2026-03-30 01:30:47.184082 | orchestrator | 2026-03-30 01:30:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:30:47.184095 | orchestrator | 2026-03-30 01:30:47 | ERROR  | Dropping encrypted entries 2026-03-30 01:30:48.785642 | orchestrator | 2026-03-30 01:30:48 | INFO  | Live migrating server dac33b73-ab4e-4af2-bd0c-52da790e5c25 2026-03-30 01:31:02.191811 | orchestrator | 2026-03-30 01:31:02 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:04.596871 | orchestrator | 2026-03-30 01:31:04 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:06.852777 | orchestrator | 2026-03-30 01:31:06 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:09.191738 | orchestrator | 2026-03-30 01:31:09 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:11.485432 | orchestrator | 2026-03-30 01:31:11 | INFO  | 
Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:13.730563 | orchestrator | 2026-03-30 01:31:13 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:16.094958 | orchestrator | 2026-03-30 01:31:16 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:18.360408 | orchestrator | 2026-03-30 01:31:18 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:20.652432 | orchestrator | 2026-03-30 01:31:20 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:23.001292 | orchestrator | 2026-03-30 01:31:23 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:25.305400 | orchestrator | 2026-03-30 01:31:25 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:27.656343 | orchestrator | 2026-03-30 01:31:27 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:31:29.954621 | orchestrator | 2026-03-30 01:31:29 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) completed with status ACTIVE 2026-03-30 01:31:29.954763 | orchestrator | 2026-03-30 01:31:29 | INFO  | Live migrating server 18c36eb5-ffb2-42f3-934d-dd9017616912 2026-03-30 01:31:40.710294 | orchestrator | 2026-03-30 01:31:40 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:43.068281 | orchestrator | 2026-03-30 01:31:43 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:45.450207 | orchestrator | 2026-03-30 01:31:45 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 
01:31:47.667958 | orchestrator | 2026-03-30 01:31:47 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:50.151845 | orchestrator | 2026-03-30 01:31:50 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:52.526374 | orchestrator | 2026-03-30 01:31:52 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:54.858474 | orchestrator | 2026-03-30 01:31:54 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:57.138004 | orchestrator | 2026-03-30 01:31:57 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress 2026-03-30 01:31:59.406940 | orchestrator | 2026-03-30 01:31:59 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) completed with status ACTIVE 2026-03-30 01:31:59.407046 | orchestrator | 2026-03-30 01:31:59 | INFO  | Live migrating server 1547377a-973c-4df3-8d29-4d6be7c4c5f3 2026-03-30 01:32:10.457408 | orchestrator | 2026-03-30 01:32:10 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:12.766186 | orchestrator | 2026-03-30 01:32:12 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:15.084361 | orchestrator | 2026-03-30 01:32:15 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:17.362067 | orchestrator | 2026-03-30 01:32:17 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:19.731916 | orchestrator | 2026-03-30 01:32:19 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:22.227208 | orchestrator | 2026-03-30 01:32:22 | INFO  | Live migration of 
1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:24.485210 | orchestrator | 2026-03-30 01:32:24 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:26.756828 | orchestrator | 2026-03-30 01:32:26 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:29.207807 | orchestrator | 2026-03-30 01:32:29 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress 2026-03-30 01:32:31.707013 | orchestrator | 2026-03-30 01:32:31 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) completed with status ACTIVE 2026-03-30 01:32:31.707114 | orchestrator | 2026-03-30 01:32:31 | INFO  | Live migrating server 488a0aa7-3295-4aaa-8881-32fc3a740872 2026-03-30 01:32:43.345458 | orchestrator | 2026-03-30 01:32:43 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:45.719193 | orchestrator | 2026-03-30 01:32:45 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:48.048773 | orchestrator | 2026-03-30 01:32:48 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:50.388346 | orchestrator | 2026-03-30 01:32:50 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:52.661501 | orchestrator | 2026-03-30 01:32:52 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:54.959714 | orchestrator | 2026-03-30 01:32:54 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:57.313572 | orchestrator | 2026-03-30 01:32:57 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:32:59.615858 | orchestrator 
| 2026-03-30 01:32:59 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress 2026-03-30 01:33:01.920149 | orchestrator | 2026-03-30 01:33:01 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) completed with status ACTIVE 2026-03-30 01:33:01.920256 | orchestrator | 2026-03-30 01:33:01 | INFO  | Live migrating server 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a 2026-03-30 01:33:13.815766 | orchestrator | 2026-03-30 01:33:13 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:16.175154 | orchestrator | 2026-03-30 01:33:16 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:18.560306 | orchestrator | 2026-03-30 01:33:18 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:20.919758 | orchestrator | 2026-03-30 01:33:20 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:23.243819 | orchestrator | 2026-03-30 01:33:23 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:25.537768 | orchestrator | 2026-03-30 01:33:25 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:27.783249 | orchestrator | 2026-03-30 01:33:27 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:30.128768 | orchestrator | 2026-03-30 01:33:30 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:32.425238 | orchestrator | 2026-03-30 01:33:32 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 01:33:34.743503 | orchestrator | 2026-03-30 01:33:34 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress 2026-03-30 
01:33:37.079273 | orchestrator | 2026-03-30 01:33:37 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) completed with status ACTIVE 2026-03-30 01:33:37.384290 | orchestrator | + compute_list 2026-03-30 01:33:37.384359 | orchestrator | + osism manage compute list testbed-node-3 2026-03-30 01:33:38.902308 | orchestrator | 2026-03-30 01:33:38 | ERROR  | Unable to get ansible vault password 2026-03-30 01:33:38.902380 | orchestrator | 2026-03-30 01:33:38 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:33:38.902399 | orchestrator | 2026-03-30 01:33:38 | ERROR  | Dropping encrypted entries 2026-03-30 01:33:40.021177 | orchestrator | +------+--------+----------+ 2026-03-30 01:33:40.021253 | orchestrator | | ID | Name | Status | 2026-03-30 01:33:40.021260 | orchestrator | |------+--------+----------| 2026-03-30 01:33:40.021264 | orchestrator | +------+--------+----------+ 2026-03-30 01:33:40.341871 | orchestrator | + osism manage compute list testbed-node-4 2026-03-30 01:33:41.886391 | orchestrator | 2026-03-30 01:33:41 | ERROR  | Unable to get ansible vault password 2026-03-30 01:33:41.886947 | orchestrator | 2026-03-30 01:33:41 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:33:41.886976 | orchestrator | 2026-03-30 01:33:41 | ERROR  | Dropping encrypted entries 2026-03-30 01:33:43.483829 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:33:43.483930 | orchestrator | | ID | Name | Status | 2026-03-30 01:33:43.483944 | orchestrator | |--------------------------------------+--------+----------| 2026-03-30 01:33:43.483951 | orchestrator | | dac33b73-ab4e-4af2-bd0c-52da790e5c25 | test-4 | ACTIVE | 2026-03-30 01:33:43.483969 | orchestrator | | 18c36eb5-ffb2-42f3-934d-dd9017616912 | test-3 | ACTIVE | 2026-03-30 01:33:43.483978 | orchestrator | | 
1547377a-973c-4df3-8d29-4d6be7c4c5f3 | test-1 | ACTIVE | 2026-03-30 01:33:43.483988 | orchestrator | | 488a0aa7-3295-4aaa-8881-32fc3a740872 | test-2 | ACTIVE | 2026-03-30 01:33:43.483996 | orchestrator | | 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a | test | ACTIVE | 2026-03-30 01:33:43.484004 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-30 01:33:43.762522 | orchestrator | + osism manage compute list testbed-node-5 2026-03-30 01:33:45.328650 | orchestrator | 2026-03-30 01:33:45 | ERROR  | Unable to get ansible vault password 2026-03-30 01:33:45.328724 | orchestrator | 2026-03-30 01:33:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:33:45.328734 | orchestrator | 2026-03-30 01:33:45 | ERROR  | Dropping encrypted entries 2026-03-30 01:33:46.389321 | orchestrator | +------+--------+----------+ 2026-03-30 01:33:46.389393 | orchestrator | | ID | Name | Status | 2026-03-30 01:33:46.389399 | orchestrator | |------+--------+----------| 2026-03-30 01:33:46.389403 | orchestrator | +------+--------+----------+ 2026-03-30 01:33:46.659242 | orchestrator | + server_ping 2026-03-30 01:33:46.660136 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-30 01:33:46.660990 | orchestrator | ++ tr -d '\r' 2026-03-30 01:33:49.090458 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:33:49.090647 | orchestrator | + ping -c3 192.168.112.168 2026-03-30 01:33:49.097179 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 
2026-03-30 01:33:49.097247 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=4.88 ms 2026-03-30 01:33:50.097012 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.23 ms 2026-03-30 01:33:51.098682 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.89 ms 2026-03-30 01:33:51.098796 | orchestrator | 2026-03-30 01:33:51.098818 | orchestrator | --- 192.168.112.168 ping statistics --- 2026-03-30 01:33:51.098832 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:33:51.098841 | orchestrator | rtt min/avg/max/mdev = 1.888/2.996/4.875/1.335 ms 2026-03-30 01:33:51.098851 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:33:51.098892 | orchestrator | + ping -c3 192.168.112.162 2026-03-30 01:33:51.113616 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 2026-03-30 01:33:51.113725 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=9.96 ms 2026-03-30 01:33:52.105557 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=1.85 ms 2026-03-30 01:33:53.107261 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.42 ms 2026-03-30 01:33:53.107334 | orchestrator | 2026-03-30 01:33:53.107340 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-03-30 01:33:53.107346 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-30 01:33:53.107350 | orchestrator | rtt min/avg/max/mdev = 1.417/4.410/9.964/3.931 ms 2026-03-30 01:33:53.107675 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:33:53.107687 | orchestrator | + ping -c3 192.168.112.195 2026-03-30 01:33:53.118684 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 
2026-03-30 01:33:53.118773 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=6.67 ms 2026-03-30 01:33:54.115884 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=1.89 ms 2026-03-30 01:33:55.117422 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.01 ms 2026-03-30 01:33:55.117507 | orchestrator | 2026-03-30 01:33:55.117519 | orchestrator | --- 192.168.112.195 ping statistics --- 2026-03-30 01:33:55.117528 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-30 01:33:55.117536 | orchestrator | rtt min/avg/max/mdev = 1.890/3.524/6.670/2.225 ms 2026-03-30 01:33:55.118508 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:33:55.118550 | orchestrator | + ping -c3 192.168.112.179 2026-03-30 01:33:55.130603 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-03-30 01:33:55.130696 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.67 ms 2026-03-30 01:33:56.127534 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.40 ms 2026-03-30 01:33:57.128943 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.91 ms 2026-03-30 01:33:57.129053 | orchestrator | 2026-03-30 01:33:57.129077 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-03-30 01:33:57.129094 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-30 01:33:57.129109 | orchestrator | rtt min/avg/max/mdev = 1.909/3.991/7.666/2.606 ms 2026-03-30 01:33:57.129148 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-30 01:33:57.129163 | orchestrator | + ping -c3 192.168.112.134 2026-03-30 01:33:57.136941 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 
2026-03-30 01:33:57.137060 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=5.68 ms 2026-03-30 01:33:58.135803 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.38 ms 2026-03-30 01:33:59.136842 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.36 ms 2026-03-30 01:33:59.136922 | orchestrator | 2026-03-30 01:33:59.136931 | orchestrator | --- 192.168.112.134 ping statistics --- 2026-03-30 01:33:59.136938 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-30 01:33:59.136944 | orchestrator | rtt min/avg/max/mdev = 1.357/3.136/5.677/1.843 ms 2026-03-30 01:33:59.136996 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-03-30 01:34:00.788220 | orchestrator | 2026-03-30 01:34:00 | ERROR  | Unable to get ansible vault password 2026-03-30 01:34:00.788308 | orchestrator | 2026-03-30 01:34:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-30 01:34:00.788320 | orchestrator | 2026-03-30 01:34:00 | ERROR  | Dropping encrypted entries 2026-03-30 01:34:02.374969 | orchestrator | 2026-03-30 01:34:02 | INFO  | Live migrating server dac33b73-ab4e-4af2-bd0c-52da790e5c25 2026-03-30 01:34:12.233662 | orchestrator | 2026-03-30 01:34:12 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:34:14.567701 | orchestrator | 2026-03-30 01:34:14 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:34:16.937190 | orchestrator | 2026-03-30 01:34:16 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:34:19.242316 | orchestrator | 2026-03-30 01:34:19 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress 2026-03-30 01:34:21.583705 | orchestrator | 2026-03-30 01:34:21 | INFO  | 
Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress
2026-03-30 01:34:23.860803 | orchestrator | 2026-03-30 01:34:23 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress
2026-03-30 01:34:26.173818 | orchestrator | 2026-03-30 01:34:26 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress
2026-03-30 01:34:28.534215 | orchestrator | 2026-03-30 01:34:28 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress
2026-03-30 01:34:30.818252 | orchestrator | 2026-03-30 01:34:30 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) is still in progress
2026-03-30 01:34:33.171177 | orchestrator | 2026-03-30 01:34:33 | INFO  | Live migration of dac33b73-ab4e-4af2-bd0c-52da790e5c25 (test-4) completed with status ACTIVE
2026-03-30 01:34:33.171247 | orchestrator | 2026-03-30 01:34:33 | INFO  | Live migrating server 18c36eb5-ffb2-42f3-934d-dd9017616912
2026-03-30 01:34:44.637029 | orchestrator | 2026-03-30 01:34:44 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:34:46.982393 | orchestrator | 2026-03-30 01:34:46 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:34:49.346290 | orchestrator | 2026-03-30 01:34:49 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:34:51.677505 | orchestrator | 2026-03-30 01:34:51 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:34:53.972689 | orchestrator | 2026-03-30 01:34:53 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:34:56.296343 | orchestrator | 2026-03-30 01:34:56 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:34:58.534991 | orchestrator | 2026-03-30 01:34:58 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:35:00.791051 | orchestrator | 2026-03-30 01:35:00 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) is still in progress
2026-03-30 01:35:03.084698 | orchestrator | 2026-03-30 01:35:03 | INFO  | Live migration of 18c36eb5-ffb2-42f3-934d-dd9017616912 (test-3) completed with status ACTIVE
2026-03-30 01:35:03.084779 | orchestrator | 2026-03-30 01:35:03 | INFO  | Live migrating server 1547377a-973c-4df3-8d29-4d6be7c4c5f3
2026-03-30 01:35:12.754199 | orchestrator | 2026-03-30 01:35:12 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:15.111459 | orchestrator | 2026-03-30 01:35:15 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:17.486106 | orchestrator | 2026-03-30 01:35:17 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:19.822071 | orchestrator | 2026-03-30 01:35:19 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:22.139845 | orchestrator | 2026-03-30 01:35:22 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:24.425872 | orchestrator | 2026-03-30 01:35:24 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:26.736900 | orchestrator | 2026-03-30 01:35:26 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:29.030229 | orchestrator | 2026-03-30 01:35:29 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:31.324223 | orchestrator | 2026-03-30 01:35:31 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:33.673629 | orchestrator | 2026-03-30 01:35:33 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) is still in progress
2026-03-30 01:35:36.091592 | orchestrator | 2026-03-30 01:35:36 | INFO  | Live migration of 1547377a-973c-4df3-8d29-4d6be7c4c5f3 (test-1) completed with status ACTIVE
2026-03-30 01:35:36.091708 | orchestrator | 2026-03-30 01:35:36 | INFO  | Live migrating server 488a0aa7-3295-4aaa-8881-32fc3a740872
2026-03-30 01:35:46.371386 | orchestrator | 2026-03-30 01:35:46 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:35:48.757764 | orchestrator | 2026-03-30 01:35:48 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:35:51.093159 | orchestrator | 2026-03-30 01:35:51 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:35:53.362467 | orchestrator | 2026-03-30 01:35:53 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:35:55.727758 | orchestrator | 2026-03-30 01:35:55 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:35:58.015689 | orchestrator | 2026-03-30 01:35:58 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:36:00.308841 | orchestrator | 2026-03-30 01:36:00 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:36:02.598380 | orchestrator | 2026-03-30 01:36:02 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) is still in progress
2026-03-30 01:36:04.984059 | orchestrator | 2026-03-30 01:36:04 | INFO  | Live migration of 488a0aa7-3295-4aaa-8881-32fc3a740872 (test-2) completed with status ACTIVE
2026-03-30 01:36:04.984172 | orchestrator | 2026-03-30 01:36:04 | INFO  | Live migrating server 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a
2026-03-30 01:36:15.740423 | orchestrator | 2026-03-30 01:36:15 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:18.078833 | orchestrator | 2026-03-30 01:36:18 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:20.489968 | orchestrator | 2026-03-30 01:36:20 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:22.892580 | orchestrator | 2026-03-30 01:36:22 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:25.171243 | orchestrator | 2026-03-30 01:36:25 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:27.582648 | orchestrator | 2026-03-30 01:36:27 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:29.880138 | orchestrator | 2026-03-30 01:36:29 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:32.228015 | orchestrator | 2026-03-30 01:36:32 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:34.553164 | orchestrator | 2026-03-30 01:36:34 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:36.940987 | orchestrator | 2026-03-30 01:36:36 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) is still in progress
2026-03-30 01:36:39.247180 | orchestrator | 2026-03-30 01:36:39 | INFO  | Live migration of 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a (test) completed with status ACTIVE
2026-03-30 01:36:39.510298 | orchestrator | + compute_list
2026-03-30 01:36:39.510466 | orchestrator | + osism manage compute list testbed-node-3
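The evacuation loop above live-migrates each instance and polls until it leaves the transient migration state, logging "still in progress" until a terminal status is reached. A minimal standalone sketch of that polling pattern (this is not the osism implementation; the `openstack` CLI invocation, function name, and 2-second interval are assumptions):

```shell
# Sketch of the poll-until-done pattern seen in the log above.
# Assumes a configured `openstack` CLI; server name/UUID is a placeholder.
live_migrate_and_wait() {
    local server="$1"
    openstack server migrate --live-migration "$server"
    while true; do
        # Query only the status field of the server.
        status=$(openstack server show "$server" -f value -c status)
        if [ "$status" != "MIGRATING" ]; then
            echo "Live migration of $server finished with status $status"
            return 0
        fi
        echo "Live migration of $server is still in progress"
        sleep 2
    done
}
```

A production version would also bound the loop with a timeout and treat a terminal `ERROR` status as a failure rather than success.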
2026-03-30 01:36:41.130243 | orchestrator | 2026-03-30 01:36:41 | ERROR  | Unable to get ansible vault password
2026-03-30 01:36:41.130350 | orchestrator | 2026-03-30 01:36:41 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-30 01:36:41.130368 | orchestrator | 2026-03-30 01:36:41 | ERROR  | Dropping encrypted entries
2026-03-30 01:36:42.421172 | orchestrator | +------+--------+----------+
2026-03-30 01:36:42.421270 | orchestrator | | ID | Name | Status |
2026-03-30 01:36:42.421281 | orchestrator | |------+--------+----------|
2026-03-30 01:36:42.421288 | orchestrator | +------+--------+----------+
2026-03-30 01:36:42.696274 | orchestrator | + osism manage compute list testbed-node-4
2026-03-30 01:36:44.258820 | orchestrator | 2026-03-30 01:36:44 | ERROR  | Unable to get ansible vault password
2026-03-30 01:36:44.258907 | orchestrator | 2026-03-30 01:36:44 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-30 01:36:44.259157 | orchestrator | 2026-03-30 01:36:44 | ERROR  | Dropping encrypted entries
2026-03-30 01:36:45.454214 | orchestrator | +------+--------+----------+
2026-03-30 01:36:45.454315 | orchestrator | | ID | Name | Status |
2026-03-30 01:36:45.454328 | orchestrator | |------+--------+----------|
2026-03-30 01:36:45.454337 | orchestrator | +------+--------+----------+
2026-03-30 01:36:45.752310 | orchestrator | + osism manage compute list testbed-node-5
2026-03-30 01:36:47.368658 | orchestrator | 2026-03-30 01:36:47 | ERROR  | Unable to get ansible vault password
2026-03-30 01:36:47.368752 | orchestrator | 2026-03-30 01:36:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-30 01:36:47.368766 | orchestrator | 2026-03-30 01:36:47 | ERROR  | Dropping encrypted entries
2026-03-30 01:36:48.927817 | orchestrator | +--------------------------------------+--------+----------+
2026-03-30 01:36:48.927918 | orchestrator | | ID | Name | Status |
2026-03-30 01:36:48.927933 | orchestrator | |--------------------------------------+--------+----------|
2026-03-30 01:36:48.927944 | orchestrator | | dac33b73-ab4e-4af2-bd0c-52da790e5c25 | test-4 | ACTIVE |
2026-03-30 01:36:48.927955 | orchestrator | | 18c36eb5-ffb2-42f3-934d-dd9017616912 | test-3 | ACTIVE |
2026-03-30 01:36:48.927966 | orchestrator | | 1547377a-973c-4df3-8d29-4d6be7c4c5f3 | test-1 | ACTIVE |
2026-03-30 01:36:48.927977 | orchestrator | | 488a0aa7-3295-4aaa-8881-32fc3a740872 | test-2 | ACTIVE |
2026-03-30 01:36:48.927987 | orchestrator | | 275e5e21-cdc7-4266-8fcd-40e56f2ddf0a | test | ACTIVE |
2026-03-30 01:36:48.927998 | orchestrator | +--------------------------------------+--------+----------+
2026-03-30 01:36:49.248820 | orchestrator | + server_ping
2026-03-30 01:36:49.250205 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-30 01:36:49.250688 | orchestrator | ++ tr -d '\r'
2026-03-30 01:36:52.337836 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-30 01:36:52.337967 | orchestrator | + ping -c3 192.168.112.168
2026-03-30 01:36:52.346587 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-03-30 01:36:52.346669 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=6.55 ms
2026-03-30 01:36:53.344816 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.37 ms
2026-03-30 01:36:54.346500 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.49 ms
2026-03-30 01:36:54.346597 | orchestrator |
2026-03-30 01:36:54.346612 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-03-30 01:36:54.346624 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-30 01:36:54.346634 | orchestrator | rtt min/avg/max/mdev = 2.372/3.803/6.552/1.944 ms
2026-03-30 01:36:54.347012 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-30 01:36:54.347044 | orchestrator | + ping -c3 192.168.112.162
2026-03-30 01:36:54.363637 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data.
2026-03-30 01:36:54.363739 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=12.8 ms
2026-03-30 01:36:55.355266 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.77 ms
2026-03-30 01:36:56.356818 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=2.17 ms
2026-03-30 01:36:56.356913 | orchestrator |
2026-03-30 01:36:56.356924 | orchestrator | --- 192.168.112.162 ping statistics ---
2026-03-30 01:36:56.356932 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-30 01:36:56.356939 | orchestrator | rtt min/avg/max/mdev = 2.166/5.902/12.770/4.862 ms
2026-03-30 01:36:56.356947 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-30 01:36:56.356954 | orchestrator | + ping -c3 192.168.112.195
2026-03-30 01:36:56.371257 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2026-03-30 01:36:56.371387 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=9.76 ms
2026-03-30 01:36:57.365340 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.81 ms
2026-03-30 01:36:58.366786 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.21 ms
2026-03-30 01:36:58.367001 | orchestrator |
2026-03-30 01:36:58.367028 | orchestrator | --- 192.168.112.195 ping statistics ---
2026-03-30 01:36:58.367042 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-30 01:36:58.367053 | orchestrator | rtt min/avg/max/mdev = 2.210/4.926/9.755/3.423 ms
2026-03-30 01:36:58.367170 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-30 01:36:58.367188 | orchestrator | + ping -c3 192.168.112.179
2026-03-30 01:36:58.382306 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-30 01:36:58.382402 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=9.41 ms
2026-03-30 01:36:59.376737 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.41 ms
2026-03-30 01:37:00.377340 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.52 ms
2026-03-30 01:37:00.377518 | orchestrator |
2026-03-30 01:37:00.377537 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-30 01:37:00.377551 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-30 01:37:00.377564 | orchestrator | rtt min/avg/max/mdev = 1.518/4.446/9.411/3.529 ms
2026-03-30 01:37:00.377585 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-30 01:37:00.377605 | orchestrator | + ping -c3 192.168.112.134
2026-03-30 01:37:00.384009 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2026-03-30 01:37:00.384103 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=4.59 ms
2026-03-30 01:37:01.383789 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.41 ms
2026-03-30 01:37:02.385338 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.98 ms
2026-03-30 01:37:02.385526 | orchestrator |
2026-03-30 01:37:02.385558 | orchestrator | --- 192.168.112.134 ping statistics ---
2026-03-30 01:37:02.385579 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-30 01:37:02.385634 | orchestrator | rtt min/avg/max/mdev = 1.976/2.994/4.594/1.144 ms
2026-03-30 01:37:02.537944 | orchestrator | ok: Runtime: 0:22:03.432038
2026-03-30 01:37:02.591720 |
2026-03-30 01:37:02.591890 | TASK [Run tempest]
2026-03-30 01:37:03.328636 | orchestrator | + set -e
2026-03-30 01:37:03.328844 | orchestrator | + source /opt/manager-vars.sh
2026-03-30 01:37:03.328884 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-30 01:37:03.328908 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-30 01:37:03.328930 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-30 01:37:03.328954 | orchestrator | ++ CEPH_VERSION=reef
2026-03-30 01:37:03.328976 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-30 01:37:03.329028 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-30 01:37:03.329051 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-30 01:37:03.329073 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-30 01:37:03.329085 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-30 01:37:03.329104 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-30 01:37:03.329120 | orchestrator | ++ export ARA=false
2026-03-30 01:37:03.329186 | orchestrator | ++ ARA=false
2026-03-30 01:37:03.329205 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-30 01:37:03.329216 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-30 01:37:03.329227 | orchestrator | ++ export TEMPEST=true
2026-03-30 01:37:03.329242 | orchestrator | ++ TEMPEST=true
2026-03-30 01:37:03.329253 | orchestrator | ++ export IS_ZUUL=true
2026-03-30 01:37:03.329264 | orchestrator | ++ IS_ZUUL=true
2026-03-30 01:37:03.329276 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232
2026-03-30 01:37:03.329288 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.232
2026-03-30 01:37:03.329299 | orchestrator | ++ export EXTERNAL_API=false
2026-03-30 01:37:03.329310 | orchestrator | ++ EXTERNAL_API=false
2026-03-30 01:37:03.329321 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-30 01:37:03.329331 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-30 01:37:03.329342 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-30 01:37:03.329353 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-30 01:37:03.329364 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-30 01:37:03.329375 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-30 01:37:03.329387 | orchestrator |
2026-03-30 01:37:03.329398 | orchestrator | # Tempest
2026-03-30 01:37:03.329409 | orchestrator |
2026-03-30 01:37:03.329450 | orchestrator | + echo
2026-03-30 01:37:03.329462 | orchestrator | + echo '# Tempest'
2026-03-30 01:37:03.329474 | orchestrator | + echo
2026-03-30 01:37:03.329485 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-30 01:37:03.329512 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-30 01:37:14.597103 | orchestrator | 2026-03-30 01:37:14 | INFO  | Prepare task for execution of tempest.
2026-03-30 01:37:14.672840 | orchestrator | 2026-03-30 01:37:14 | INFO  | Task c0e51258-f666-4059-8f20-3187aa416873 (tempest) was prepared for execution.
2026-03-30 01:37:14.672970 | orchestrator | 2026-03-30 01:37:14 | INFO  | It takes a moment until task c0e51258-f666-4059-8f20-3187aa416873 (tempest) has been started and output is visible here.
2026-03-30 01:38:28.349647 | orchestrator |
2026-03-30 01:38:28.349776 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-30 01:38:28.349793 | orchestrator |
2026-03-30 01:38:28.349805 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-30 01:38:28.349831 | orchestrator | Monday 30 March 2026 01:37:17 +0000 (0:00:00.322) 0:00:00.322 **********
2026-03-30 01:38:28.349842 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.349854 | orchestrator |
2026-03-30 01:38:28.349865 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-30 01:38:28.349876 | orchestrator | Monday 30 March 2026 01:37:19 +0000 (0:00:01.070) 0:00:01.392 **********
2026-03-30 01:38:28.349887 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.349898 | orchestrator |
2026-03-30 01:38:28.349909 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-30 01:38:28.349919 | orchestrator | Monday 30 March 2026 01:37:20 +0000 (0:00:00.427) 0:00:02.531 **********
2026-03-30 01:38:28.349930 | orchestrator | ok: [testbed-manager]
2026-03-30 01:38:28.349941 | orchestrator |
2026-03-30 01:38:28.349952 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-30 01:38:28.349963 | orchestrator | Monday 30 March 2026 01:37:20 +0000 (0:00:00.427) 0:00:02.959 **********
2026-03-30 01:38:28.349976 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.349996 | orchestrator |
2026-03-30 01:38:28.350079 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-30 01:38:28.350104 | orchestrator | Monday 30 March 2026 01:37:40 +0000 (0:00:19.515) 0:00:22.474 **********
2026-03-30 01:38:28.350161 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-30 01:38:28.350180 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-30 01:38:28.350202 | orchestrator |
2026-03-30 01:38:28.350220 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-30 01:38:28.350239 | orchestrator | Monday 30 March 2026 01:37:48 +0000 (0:00:08.524) 0:00:30.998 **********
2026-03-30 01:38:28.350259 | orchestrator | ok: [testbed-manager] => {
2026-03-30 01:38:28.350276 | orchestrator |  "changed": false,
2026-03-30 01:38:28.350294 | orchestrator |  "msg": "All assertions passed"
2026-03-30 01:38:28.350312 | orchestrator | }
2026-03-30 01:38:28.350330 | orchestrator |
2026-03-30 01:38:28.350349 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-30 01:38:28.350367 | orchestrator | Monday 30 March 2026 01:37:48 +0000 (0:00:00.149) 0:00:31.148 **********
2026-03-30 01:38:28.350386 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350404 | orchestrator |
2026-03-30 01:38:28.350481 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-30 01:38:28.350497 | orchestrator | Monday 30 March 2026 01:37:52 +0000 (0:00:03.547) 0:00:34.695 **********
2026-03-30 01:38:28.350508 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350518 | orchestrator |
2026-03-30 01:38:28.350529 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-30 01:38:28.350540 | orchestrator | Monday 30 March 2026 01:37:54 +0000 (0:00:01.801) 0:00:36.497 **********
2026-03-30 01:38:28.350551 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350561 | orchestrator |
2026-03-30 01:38:28.350573 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-30 01:38:28.350584 | orchestrator | Monday 30 March 2026 01:37:57 +0000 (0:00:03.641) 0:00:40.139 **********
2026-03-30 01:38:28.350594 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350605 | orchestrator |
2026-03-30 01:38:28.350616 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-30 01:38:28.350626 | orchestrator | Monday 30 March 2026 01:37:57 +0000 (0:00:00.179) 0:00:40.318 **********
2026-03-30 01:38:28.350637 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.350648 | orchestrator |
2026-03-30 01:38:28.350659 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-30 01:38:28.350670 | orchestrator | Monday 30 March 2026 01:38:00 +0000 (0:00:02.534) 0:00:42.853 **********
2026-03-30 01:38:28.350681 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.350691 | orchestrator |
2026-03-30 01:38:28.350702 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-30 01:38:28.350713 | orchestrator | Monday 30 March 2026 01:38:08 +0000 (0:00:08.406) 0:00:51.259 **********
2026-03-30 01:38:28.350723 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.350734 | orchestrator |
2026-03-30 01:38:28.350745 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-30 01:38:28.350756 | orchestrator | Monday 30 March 2026 01:38:09 +0000 (0:00:00.676) 0:00:51.936 **********
2026-03-30 01:38:28.350766 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350777 | orchestrator |
2026-03-30 01:38:28.350788 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-30 01:38:28.350799 | orchestrator | Monday 30 March 2026 01:38:11 +0000 (0:00:01.531) 0:00:53.467 **********
2026-03-30 01:38:28.350809 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350820 | orchestrator |
2026-03-30 01:38:28.350831 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-30 01:38:28.350842 | orchestrator | Monday 30 March 2026 01:38:12 +0000 (0:00:01.551) 0:00:55.019 **********
2026-03-30 01:38:28.350853 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350863 | orchestrator |
2026-03-30 01:38:28.350874 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-30 01:38:28.350898 | orchestrator | Monday 30 March 2026 01:38:12 +0000 (0:00:00.171) 0:00:55.190 **********
2026-03-30 01:38:28.350909 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350919 | orchestrator |
2026-03-30 01:38:28.350942 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-30 01:38:28.350953 | orchestrator | Monday 30 March 2026 01:38:13 +0000 (0:00:00.339) 0:00:55.529 **********
2026-03-30 01:38:28.350964 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-30 01:38:28.350974 | orchestrator |
2026-03-30 01:38:28.350985 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-30 01:38:28.351021 | orchestrator | Monday 30 March 2026 01:38:17 +0000 (0:00:03.896) 0:00:59.426 **********
2026-03-30 01:38:28.351033 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-30 01:38:28.351044 | orchestrator |  "changed": false,
2026-03-30 01:38:28.351055 | orchestrator |  "msg": "All assertions passed"
2026-03-30 01:38:28.351066 | orchestrator | }
2026-03-30 01:38:28.351076 | orchestrator |
2026-03-30 01:38:28.351088 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-30 01:38:28.351099 | orchestrator | Monday 30 March 2026 01:38:17 +0000 (0:00:00.174) 0:00:59.600 **********
2026-03-30 01:38:28.351110 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-30 01:38:28.351122 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-30 01:38:28.351132 | orchestrator | skipping: [testbed-manager]
2026-03-30 01:38:28.351143 | orchestrator |
2026-03-30 01:38:28.351154 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-30 01:38:28.351168 | orchestrator | Monday 30 March 2026 01:38:17 +0000 (0:00:00.188) 0:00:59.789 **********
2026-03-30 01:38:28.351187 | orchestrator | skipping: [testbed-manager]
2026-03-30 01:38:28.351207 | orchestrator |
2026-03-30 01:38:28.351226 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-30 01:38:28.351245 | orchestrator | Monday 30 March 2026 01:38:17 +0000 (0:00:00.165) 0:00:59.954 **********
2026-03-30 01:38:28.351264 | orchestrator | ok: [testbed-manager]
2026-03-30 01:38:28.351284 | orchestrator |
2026-03-30 01:38:28.351305 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-30 01:38:28.351325 | orchestrator | Monday 30 March 2026 01:38:18 +0000 (0:00:00.481) 0:01:00.436 **********
2026-03-30 01:38:28.351347 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.351366 | orchestrator |
2026-03-30 01:38:28.351381 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-30 01:38:28.351392 | orchestrator | Monday 30 March 2026 01:38:18 +0000 (0:00:00.845) 0:01:01.281 **********
2026-03-30 01:38:28.351403 | orchestrator | ok: [testbed-manager]
2026-03-30 01:38:28.351413 | orchestrator |
2026-03-30 01:38:28.351443 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-30 01:38:28.351454 | orchestrator | Monday 30 March 2026 01:38:19 +0000 (0:00:00.411) 0:01:01.693 **********
2026-03-30 01:38:28.351465 | orchestrator | skipping: [testbed-manager]
2026-03-30 01:38:28.351476 | orchestrator |
2026-03-30 01:38:28.351486 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-30 01:38:28.351497 | orchestrator | Monday 30 March 2026 01:38:19 +0000 (0:00:00.295) 0:01:01.988 **********
2026-03-30 01:38:28.351507 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-30 01:38:28.351532 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-30 01:38:28.351553 | orchestrator |
2026-03-30 01:38:28.351565 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-30 01:38:28.351576 | orchestrator | Monday 30 March 2026 01:38:27 +0000 (0:00:07.744) 0:01:09.732 **********
2026-03-30 01:38:28.351586 | orchestrator | changed: [testbed-manager]
2026-03-30 01:38:28.351607 | orchestrator |
2026-03-30 01:38:28.351618 | orchestrator | PLAY RECAP *********************************************************************
2026-03-30 01:38:28.351630 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-30 01:38:28.351642 | orchestrator |
2026-03-30 01:38:28.351653 | orchestrator |
2026-03-30 01:38:28.351664 | orchestrator | TASKS RECAP ********************************************************************
2026-03-30 01:38:28.351674 | orchestrator | Monday 30 March 2026 01:38:28 +0000 (0:00:00.967) 0:01:10.700 **********
2026-03-30 01:38:28.351685 | orchestrator | ===============================================================================
2026-03-30 01:38:28.351695 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 19.52s
2026-03-30 01:38:28.351706 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.52s
2026-03-30 01:38:28.351716 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.41s
2026-03-30 01:38:28.351727 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.74s
2026-03-30 01:38:28.351745 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.90s
2026-03-30 01:38:28.351756 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.64s
2026-03-30 01:38:28.351766 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.55s
2026-03-30 01:38:28.351777 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.53s
2026-03-30 01:38:28.351787 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.80s
2026-03-30 01:38:28.351798 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.55s
2026-03-30 01:38:28.351809 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.53s
2026-03-30 01:38:28.351819 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.14s
2026-03-30 01:38:28.351830 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.07s
2026-03-30 01:38:28.351840 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.97s
2026-03-30 01:38:28.351851 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.85s
2026-03-30 01:38:28.351861 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.68s
2026-03-30 01:38:28.351872 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.48s
2026-03-30 01:38:28.351898 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.43s
2026-03-30 01:38:28.553777 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.41s
2026-03-30 01:38:28.553887 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.34s
2026-03-30 01:38:28.719830 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-30 01:38:28.724039 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-30 01:38:28.728765 | orchestrator |
2026-03-30 01:38:28.728822 | orchestrator | ## IDENTITY (API)
2026-03-30 01:38:28.728830 | orchestrator |
2026-03-30 01:38:28.728836 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-30 01:38:28.728842 | orchestrator | + echo
2026-03-30 01:38:28.728848 | orchestrator | + echo '## IDENTITY (API)'
2026-03-30 01:38:28.728853 | orchestrator | + echo
2026-03-30 01:38:28.728861 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-30 01:38:28.728871 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-30 01:38:28.729373 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-30 01:38:28.730103 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-30 01:38:28.733135 | orchestrator | + tee -a /opt/tempest/20260330-0138.log
2026-03-30 01:38:32.348725 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-30 01:38:32.348847 | orchestrator | Did you mean one of these?
2026-03-30 01:38:32.348866 | orchestrator | help
2026-03-30 01:38:32.348877 | orchestrator | init
2026-03-30 01:38:32.610186 | orchestrator |
2026-03-30 01:38:32.610310 | orchestrator | ## IMAGE (API)
2026-03-30 01:38:32.610325 | orchestrator |
2026-03-30 01:38:32.610338 | orchestrator | + echo
2026-03-30 01:38:32.610349 | orchestrator | + echo '## IMAGE (API)'
2026-03-30 01:38:32.610361 | orchestrator | + echo
2026-03-30 01:38:32.610372 | orchestrator | + _tempest tempest.api.image.v2
2026-03-30 01:38:32.610384 | orchestrator | + local regex=tempest.api.image.v2
2026-03-30 01:38:32.610918 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-30 01:38:32.612006 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-30 01:38:32.617108 | orchestrator | + tee -a /opt/tempest/20260330-0138.log
2026-03-30 01:38:35.818869 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-30 01:38:35.818984 | orchestrator | Did you mean one of these?
2026-03-30 01:38:35.819000 | orchestrator | help 2026-03-30 01:38:35.819010 | orchestrator | init 2026-03-30 01:38:36.081556 | orchestrator | 2026-03-30 01:38:36.081670 | orchestrator | ## NETWORK (API) 2026-03-30 01:38:36.081687 | orchestrator | 2026-03-30 01:38:36.081708 | orchestrator | + echo 2026-03-30 01:38:36.081728 | orchestrator | + echo '## NETWORK (API)' 2026-03-30 01:38:36.081745 | orchestrator | + echo 2026-03-30 01:38:36.081757 | orchestrator | + _tempest tempest.api.network 2026-03-30 01:38:36.081768 | orchestrator | + local regex=tempest.api.network 2026-03-30 01:38:36.081797 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16 2026-03-30 01:38:36.081883 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-30 01:38:36.083852 | orchestrator | + tee -a /opt/tempest/20260330-0138.log 2026-03-30 01:38:39.268902 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-30 01:38:39.269035 | orchestrator | Did you mean one of these? 
2026-03-30 01:38:39.269053 | orchestrator | help 2026-03-30 01:38:39.269066 | orchestrator | init 2026-03-30 01:38:39.568151 | orchestrator | 2026-03-30 01:38:39.568253 | orchestrator | ## VOLUME (API) 2026-03-30 01:38:39.568269 | orchestrator | 2026-03-30 01:38:39.568282 | orchestrator | + echo 2026-03-30 01:38:39.568293 | orchestrator | + echo '## VOLUME (API)' 2026-03-30 01:38:39.568306 | orchestrator | + echo 2026-03-30 01:38:39.568317 | orchestrator | + _tempest tempest.api.volume 2026-03-30 01:38:39.568328 | orchestrator | + local regex=tempest.api.volume 2026-03-30 01:38:39.568370 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-03-30 01:38:39.568728 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-30 01:38:39.571028 | orchestrator | + tee -a /opt/tempest/20260330-0138.log 2026-03-30 01:38:43.221480 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-30 01:38:43.221575 | orchestrator | Did you mean one of these? 
2026-03-30 01:38:43.221587 | orchestrator | help 2026-03-30 01:38:43.221595 | orchestrator | init 2026-03-30 01:38:43.621615 | orchestrator | 2026-03-30 01:38:43.621714 | orchestrator | ## COMPUTE (API) 2026-03-30 01:38:43.621733 | orchestrator | 2026-03-30 01:38:43.621747 | orchestrator | + echo 2026-03-30 01:38:43.621758 | orchestrator | + echo '## COMPUTE (API)' 2026-03-30 01:38:43.621771 | orchestrator | + echo 2026-03-30 01:38:43.621782 | orchestrator | + _tempest tempest.api.compute 2026-03-30 01:38:43.621824 | orchestrator | + local regex=tempest.api.compute 2026-03-30 01:38:43.622311 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-03-30 01:38:43.622652 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-30 01:38:43.625389 | orchestrator | + tee -a /opt/tempest/20260330-0138.log 2026-03-30 01:38:47.133295 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-30 01:38:47.133401 | orchestrator | Did you mean one of these? 
2026-03-30 01:38:47.133418 | orchestrator | help 2026-03-30 01:38:47.133430 | orchestrator | init 2026-03-30 01:38:47.455523 | orchestrator | 2026-03-30 01:38:47.455598 | orchestrator | ## DNS (API) 2026-03-30 01:38:47.455605 | orchestrator | 2026-03-30 01:38:47.455610 | orchestrator | + echo 2026-03-30 01:38:47.455614 | orchestrator | + echo '## DNS (API)' 2026-03-30 01:38:47.455619 | orchestrator | + echo 2026-03-30 01:38:47.455623 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-03-30 01:38:47.455629 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-03-30 01:38:47.456839 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-03-30 01:38:47.457190 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-30 01:38:47.459965 | orchestrator | + tee -a /opt/tempest/20260330-0138.log 2026-03-30 01:38:50.937367 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-30 01:38:50.937516 | orchestrator | Did you mean one of these? 
2026-03-30 01:38:50.937531 | orchestrator | help 2026-03-30 01:38:50.937540 | orchestrator | init 2026-03-30 01:38:51.287036 | orchestrator | 2026-03-30 01:38:51.287104 | orchestrator | ## OBJECT-STORE (API) 2026-03-30 01:38:51.287110 | orchestrator | 2026-03-30 01:38:51.287114 | orchestrator | + echo 2026-03-30 01:38:51.287119 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-03-30 01:38:51.287123 | orchestrator | + echo 2026-03-30 01:38:51.287127 | orchestrator | + _tempest tempest.api.object_storage 2026-03-30 01:38:51.287132 | orchestrator | + local regex=tempest.api.object_storage 2026-03-30 01:38:51.287858 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-03-30 01:38:51.288925 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-30 01:38:51.291927 | orchestrator | + tee -a /opt/tempest/20260330-0138.log 2026-03-30 01:38:54.744826 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-30 01:38:54.744959 | orchestrator | Did you mean one of these? 
2026-03-30 01:38:54.744992 | orchestrator | help
2026-03-30 01:38:54.745015 | orchestrator | init
2026-03-30 01:38:55.213760 | orchestrator | ok: Runtime: 0:01:52.187434
2026-03-30 01:38:55.238880 |
2026-03-30 01:38:55.239070 | TASK [Check prometheus alert status]
2026-03-30 01:38:55.776816 | orchestrator | skipping: Conditional result was False
2026-03-30 01:38:55.780423 |
2026-03-30 01:38:55.780593 | PLAY RECAP
2026-03-30 01:38:55.780743 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-30 01:38:55.780808 |
2026-03-30 01:38:55.995633 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-30 01:38:55.996779 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-30 01:38:56.743018 |
2026-03-30 01:38:56.743839 | PLAY [Post output play]
2026-03-30 01:38:56.760404 |
2026-03-30 01:38:56.760598 | LOOP [stage-output : Register sources]
2026-03-30 01:38:56.822596 |
2026-03-30 01:38:56.822856 | TASK [stage-output : Check sudo]
2026-03-30 01:38:57.666472 | orchestrator | sudo: a password is required
2026-03-30 01:38:57.858697 | orchestrator | ok: Runtime: 0:00:00.013521
2026-03-30 01:38:57.874062 |
2026-03-30 01:38:57.874225 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-30 01:38:57.911705 |
2026-03-30 01:38:57.911963 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-30 01:38:57.979681 | orchestrator | ok
2026-03-30 01:38:57.988173 |
2026-03-30 01:38:57.988296 | LOOP [stage-output : Ensure target folders exist]
2026-03-30 01:38:58.437825 | orchestrator | ok: "docs"
2026-03-30 01:38:58.438150 |
2026-03-30 01:38:58.704884 | orchestrator | ok: "artifacts"
2026-03-30 01:38:58.967350 | orchestrator | ok: "logs"
2026-03-30 01:38:58.976979 |
2026-03-30 01:38:58.977116 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-30 01:38:59.000488 |
2026-03-30 01:38:59.000679 | TASK [stage-output : Make all log files readable]
2026-03-30 01:38:59.306041 | orchestrator | ok
2026-03-30 01:38:59.316162 |
2026-03-30 01:38:59.316369 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-30 01:38:59.351351 | orchestrator | skipping: Conditional result was False
2026-03-30 01:38:59.370656 |
2026-03-30 01:38:59.370908 | TASK [stage-output : Discover log files for compression]
2026-03-30 01:38:59.396403 | orchestrator | skipping: Conditional result was False
2026-03-30 01:38:59.404971 |
2026-03-30 01:38:59.405119 | LOOP [stage-output : Archive everything from logs]
2026-03-30 01:38:59.460586 |
2026-03-30 01:38:59.460820 | PLAY [Post cleanup play]
2026-03-30 01:38:59.470590 |
2026-03-30 01:38:59.470716 | TASK [Set cloud fact (Zuul deployment)]
2026-03-30 01:38:59.528487 | orchestrator | ok
2026-03-30 01:38:59.540623 |
2026-03-30 01:38:59.540762 | TASK [Set cloud fact (local deployment)]
2026-03-30 01:38:59.576541 | orchestrator | skipping: Conditional result was False
2026-03-30 01:38:59.594061 |
2026-03-30 01:38:59.594218 | TASK [Clean the cloud environment]
2026-03-30 01:39:00.224665 | orchestrator | 2026-03-30 01:39:00 - clean up servers
2026-03-30 01:39:01.006753 | orchestrator | 2026-03-30 01:39:01 - testbed-manager
2026-03-30 01:39:01.095718 | orchestrator | 2026-03-30 01:39:01 - testbed-node-0
2026-03-30 01:39:01.180808 | orchestrator | 2026-03-30 01:39:01 - testbed-node-5
2026-03-30 01:39:01.270863 | orchestrator | 2026-03-30 01:39:01 - testbed-node-2
2026-03-30 01:39:01.356909 | orchestrator | 2026-03-30 01:39:01 - testbed-node-4
2026-03-30 01:39:01.457503 | orchestrator | 2026-03-30 01:39:01 - testbed-node-1
2026-03-30 01:39:01.548833 | orchestrator | 2026-03-30 01:39:01 - testbed-node-3
2026-03-30 01:39:01.634548 | orchestrator | 2026-03-30 01:39:01 - clean up keypairs
2026-03-30 01:39:01.657394 | orchestrator | 2026-03-30 01:39:01 - testbed
2026-03-30 01:39:01.686736 | orchestrator | 2026-03-30 01:39:01 - wait for servers to be gone
2026-03-30 01:39:12.561976 | orchestrator | 2026-03-30 01:39:12 - clean up ports
2026-03-30 01:39:12.750954 | orchestrator | 2026-03-30 01:39:12 - 436aac42-5261-465a-b138-22ed51c8fd47
2026-03-30 01:39:13.040701 | orchestrator | 2026-03-30 01:39:13 - 575a04ae-fbad-432d-9f15-1dce71819142
2026-03-30 01:39:13.282797 | orchestrator | 2026-03-30 01:39:13 - 9eb0288d-fb50-4071-871b-0ff700d20df2
2026-03-30 01:39:13.527523 | orchestrator | 2026-03-30 01:39:13 - a091018f-e568-41b8-95d5-47ab5ce14d7a
2026-03-30 01:39:13.736258 | orchestrator | 2026-03-30 01:39:13 - afa78edc-8b79-4a33-b6c7-a0d97534e4d0
2026-03-30 01:39:13.955321 | orchestrator | 2026-03-30 01:39:13 - bea88e12-5a2d-476c-93ca-f1ce3231df8b
2026-03-30 01:39:14.162736 | orchestrator | 2026-03-30 01:39:14 - fd4b5611-9510-449f-b8f7-d6f2aee0660a
2026-03-30 01:39:14.585230 | orchestrator | 2026-03-30 01:39:14 - clean up volumes
2026-03-30 01:39:14.709869 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-manager-base
2026-03-30 01:39:14.749289 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-0-node-base
2026-03-30 01:39:14.794672 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-3-node-base
2026-03-30 01:39:14.839353 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-4-node-base
2026-03-30 01:39:14.885507 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-2-node-base
2026-03-30 01:39:14.928131 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-1-node-base
2026-03-30 01:39:14.972774 | orchestrator | 2026-03-30 01:39:14 - testbed-volume-5-node-base
2026-03-30 01:39:15.015196 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-1-node-4
2026-03-30 01:39:15.066120 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-4-node-4
2026-03-30 01:39:15.115332 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-7-node-4
2026-03-30 01:39:15.160990 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-3-node-3
2026-03-30 01:39:15.204593 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-2-node-5
2026-03-30 01:39:15.250854 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-5-node-5
2026-03-30 01:39:15.297465 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-0-node-3
2026-03-30 01:39:15.340450 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-6-node-3
2026-03-30 01:39:15.385799 | orchestrator | 2026-03-30 01:39:15 - testbed-volume-8-node-5
2026-03-30 01:39:15.426127 | orchestrator | 2026-03-30 01:39:15 - disconnect routers
2026-03-30 01:39:15.497213 | orchestrator | 2026-03-30 01:39:15 - testbed
2026-03-30 01:39:16.541947 | orchestrator | 2026-03-30 01:39:16 - clean up subnets
2026-03-30 01:39:16.610456 | orchestrator | 2026-03-30 01:39:16 - subnet-testbed-management
2026-03-30 01:39:16.816348 | orchestrator | 2026-03-30 01:39:16 - clean up networks
2026-03-30 01:39:17.022634 | orchestrator | 2026-03-30 01:39:17 - net-testbed-management
2026-03-30 01:39:17.383803 | orchestrator | 2026-03-30 01:39:17 - clean up security groups
2026-03-30 01:39:17.425802 | orchestrator | 2026-03-30 01:39:17 - testbed-node
2026-03-30 01:39:17.549354 | orchestrator | 2026-03-30 01:39:17 - testbed-management
2026-03-30 01:39:17.757730 | orchestrator | 2026-03-30 01:39:17 - clean up floating ips
2026-03-30 01:39:17.794882 | orchestrator | 2026-03-30 01:39:17 - 81.163.192.232
2026-03-30 01:39:18.180080 | orchestrator | 2026-03-30 01:39:18 - clean up routers
2026-03-30 01:39:18.241732 | orchestrator | 2026-03-30 01:39:18 - testbed
2026-03-30 01:39:19.652775 | orchestrator | ok: Runtime: 0:00:19.348493
2026-03-30 01:39:19.656976 |
2026-03-30 01:39:19.657122 | PLAY RECAP
2026-03-30 01:39:19.657225 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-30 01:39:19.657277 |
2026-03-30 01:39:19.798014 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-30 01:39:19.799190 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-30 01:39:20.548306 |
2026-03-30 01:39:20.548485 | PLAY [Cleanup play]
2026-03-30 01:39:20.565149 |
2026-03-30 01:39:20.565299 | TASK [Set cloud fact (Zuul deployment)]
2026-03-30 01:39:20.621517 | orchestrator | ok
2026-03-30 01:39:20.630569 |
2026-03-30 01:39:20.630727 | TASK [Set cloud fact (local deployment)]
2026-03-30 01:39:20.655179 | orchestrator | skipping: Conditional result was False
2026-03-30 01:39:20.666710 |
2026-03-30 01:39:20.666915 | TASK [Clean the cloud environment]
2026-03-30 01:39:21.835872 | orchestrator | 2026-03-30 01:39:21 - clean up servers
2026-03-30 01:39:22.309759 | orchestrator | 2026-03-30 01:39:22 - clean up keypairs
2026-03-30 01:39:22.328130 | orchestrator | 2026-03-30 01:39:22 - wait for servers to be gone
2026-03-30 01:39:22.369476 | orchestrator | 2026-03-30 01:39:22 - clean up ports
2026-03-30 01:39:22.450199 | orchestrator | 2026-03-30 01:39:22 - clean up volumes
2026-03-30 01:39:22.510746 | orchestrator | 2026-03-30 01:39:22 - disconnect routers
2026-03-30 01:39:22.540075 | orchestrator | 2026-03-30 01:39:22 - clean up subnets
2026-03-30 01:39:22.559825 | orchestrator | 2026-03-30 01:39:22 - clean up networks
2026-03-30 01:39:22.687768 | orchestrator | 2026-03-30 01:39:22 - clean up security groups
2026-03-30 01:39:22.724289 | orchestrator | 2026-03-30 01:39:22 - clean up floating ips
2026-03-30 01:39:22.750976 | orchestrator | 2026-03-30 01:39:22 - clean up routers
2026-03-30 01:39:23.203246 | orchestrator | ok: Runtime: 0:00:01.311345
2026-03-30 01:39:23.205770 |
2026-03-30 01:39:23.205913 | PLAY RECAP
2026-03-30 01:39:23.206000 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-30 01:39:23.206048 |
2026-03-30 01:39:23.336727 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-30 01:39:23.337880 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-30 01:39:24.194895 |
2026-03-30 01:39:24.195059 | PLAY [Base post-fetch]
2026-03-30 01:39:24.210761 |
2026-03-30 01:39:24.210934 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-30 01:39:24.268283 | orchestrator | skipping: Conditional result was False
2026-03-30 01:39:24.284209 |
2026-03-30 01:39:24.284446 | TASK [fetch-output : Set log path for single node]
2026-03-30 01:39:24.343707 | orchestrator | ok
2026-03-30 01:39:24.353479 |
2026-03-30 01:39:24.353680 | LOOP [fetch-output : Ensure local output dirs]
2026-03-30 01:39:24.886603 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/work/logs"
2026-03-30 01:39:25.171391 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/work/artifacts"
2026-03-30 01:39:25.478155 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a0aa70b46ae349658f704d1d7df2bdbe/work/docs"
2026-03-30 01:39:25.499370 |
2026-03-30 01:39:25.499540 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-30 01:39:26.470303 | orchestrator | changed: .d..t...... ./
2026-03-30 01:39:26.470634 | orchestrator | changed: All items complete
2026-03-30 01:39:26.470693 |
2026-03-30 01:39:27.245406 | orchestrator | changed: .d..t...... ./
2026-03-30 01:39:27.996461 | orchestrator | changed: .d..t...... ./
2026-03-30 01:39:28.026474 |
2026-03-30 01:39:28.026639 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-30 01:39:28.066526 | orchestrator | skipping: Conditional result was False
2026-03-30 01:39:28.070486 | orchestrator | skipping: Conditional result was False
2026-03-30 01:39:28.086860 |
2026-03-30 01:39:28.087008 | PLAY RECAP
2026-03-30 01:39:28.087080 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-30 01:39:28.087116 |
2026-03-30 01:39:28.239888 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-30 01:39:28.242548 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-30 01:39:29.069353 |
2026-03-30 01:39:29.069539 | PLAY [Base post]
2026-03-30 01:39:29.085839 |
2026-03-30 01:39:29.086008 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-30 01:39:30.086390 | orchestrator | changed
2026-03-30 01:39:30.108398 |
2026-03-30 01:39:30.108564 | PLAY RECAP
2026-03-30 01:39:30.108941 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-30 01:39:30.109015 |
2026-03-30 01:39:30.250778 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-30 01:39:30.253411 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-30 01:39:31.083128 |
2026-03-30 01:39:31.083300 | PLAY [Base post-logs]
2026-03-30 01:39:31.093858 |
2026-03-30 01:39:31.093995 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-30 01:39:31.582640 | localhost | changed
2026-03-30 01:39:31.601659 |
2026-03-30 01:39:31.601924 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-30 01:39:31.640243 | localhost | ok
2026-03-30 01:39:31.645097 |
2026-03-30 01:39:31.645246 | TASK [Set zuul-log-path fact]
2026-03-30 01:39:31.672604 | localhost | ok
2026-03-30 01:39:31.685886 |
2026-03-30 01:39:31.686040 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-30 01:39:31.724542 | localhost | ok
2026-03-30 01:39:31.730607 |
2026-03-30 01:39:31.730771 | TASK [upload-logs : Create log directories]
2026-03-30 01:39:32.245834 | localhost | changed
2026-03-30 01:39:32.251808 |
2026-03-30 01:39:32.251970 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-30 01:39:32.764320 | localhost -> localhost | ok: Runtime: 0:00:00.007532
2026-03-30 01:39:32.768576 |
2026-03-30 01:39:32.768686 | TASK [upload-logs : Upload logs to log server]
2026-03-30 01:39:33.337275 | localhost | Output suppressed because no_log was given
2026-03-30 01:39:33.342100 |
2026-03-30 01:39:33.342335 | LOOP [upload-logs : Compress console log and json output]
2026-03-30 01:39:33.405268 | localhost | skipping: Conditional result was False
2026-03-30 01:39:33.410487 | localhost | skipping: Conditional result was False
2026-03-30 01:39:33.417909 |
2026-03-30 01:39:33.418091 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-30 01:39:33.467490 | localhost | skipping: Conditional result was False
2026-03-30 01:39:33.468149 |
2026-03-30 01:39:33.471393 | localhost | skipping: Conditional result was False
2026-03-30 01:39:33.479461 |
2026-03-30 01:39:33.479638 | LOOP [upload-logs : Upload console log and json output]